2023-03-08 13:46:28,357 INFO [train.py:970] (3/4) Training started
2023-03-08 13:46:28,357 INFO [train.py:980] (3/4) Device: cuda:3
2023-03-08 13:46:28,366 INFO [train.py:989] (3/4) {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'random_padding', 'icefall-git-sha1': '4cf2472-dirty', 'icefall-git-date': 'Wed Mar 1 23:53:23 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_random_padding', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-9-0208143539-7dcb6bfd79-b6fdq', 'IP address': '10.177.13.150'}, 'world_size': 4, 'master_port': 18180, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'base_lr': 0.05, 'lr_batches': 5000, 'lr_epochs': 3.5, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 2000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': True, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 750, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'random_left_padding': False, 'num_left_padding': 8, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'blank_id': 0, 'vocab_size': 500}
2023-03-08 13:46:28,367 INFO [train.py:991] (3/4) About to create model
2023-03-08 13:46:29,230 INFO [zipformer.py:178] (3/4) At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
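The config above sets lm_scale=0.25, am_scale=0.0 and simple_loss_scale=0.5, which govern the pruned-transducer objective whose components (loss, simple_loss, pruned_loss) are reported throughout this log. A minimal sketch of how the two components are typically combined in this recipe (ignoring the warmup-dependent weighting icefall applies; this is not the exact train.py code):

import torch

def combine_transducer_losses(
    simple_loss: torch.Tensor,
    pruned_loss: torch.Tensor,
    simple_loss_scale: float = 0.5,
) -> torch.Tensor:
    # The "simple" loss is a cheap trivial-joiner bound whose gradients
    # define the pruning ranges; the pruned loss is the accurate objective
    # evaluated only inside those ranges (prune_range=5 in the config).
    return simple_loss_scale * simple_loss + pruned_loss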
2023-03-08 13:46:29,254 INFO [train.py:995] (3/4) Number of model parameters: 70369391
2023-03-08 13:46:32,459 INFO [train.py:1010] (3/4) Using DDP
2023-03-08 13:46:32,865 INFO [asr_datamodule.py:439] (3/4) About to get the shuffled train-clean-100, train-clean-360 and train-other-500 cuts
2023-03-08 13:46:32,868 INFO [asr_datamodule.py:244] (3/4) Enable MUSAN
2023-03-08 13:46:32,868 INFO [asr_datamodule.py:245] (3/4) About to get Musan cuts
2023-03-08 13:46:35,230 INFO [asr_datamodule.py:269] (3/4) Enable SpecAugment
2023-03-08 13:46:35,231 INFO [asr_datamodule.py:270] (3/4) Time warp factor: 80
2023-03-08 13:46:35,231 INFO [asr_datamodule.py:280] (3/4) Num frame mask: 10
2023-03-08 13:46:35,231 INFO [asr_datamodule.py:293] (3/4) About to create train dataset
2023-03-08 13:46:35,231 INFO [asr_datamodule.py:320] (3/4) Using DynamicBucketingSampler.
2023-03-08 13:46:42,028 INFO [asr_datamodule.py:335] (3/4) About to create train dataloader
2023-03-08 13:46:42,029 INFO [asr_datamodule.py:449] (3/4) About to get dev-clean cuts
2023-03-08 13:46:42,031 INFO [asr_datamodule.py:456] (3/4) About to get dev-other cuts
2023-03-08 13:46:42,032 INFO [asr_datamodule.py:366] (3/4) About to create dev dataset
2023-03-08 13:46:42,373 INFO [asr_datamodule.py:383] (3/4) About to create dev dataloader
2023-03-08 13:47:06,862 INFO [train.py:898] (3/4) Epoch 1, batch 0, loss[loss=7.351, simple_loss=6.655, pruned_loss=6.944, over 18427.00 frames. ], tot_loss[loss=7.351, simple_loss=6.655, pruned_loss=6.944, over 18427.00 frames. ], batch size: 42, lr: 2.50e-02, grad_scale: 2.0
2023-03-08 13:47:06,863 INFO [train.py:923] (3/4) Computing validation loss
2023-03-08 13:47:18,762 INFO [train.py:932] (3/4) Epoch 1, validation: loss=6.911, simple_loss=6.237, pruned_loss=6.721, over 944034.00 frames.
2023-03-08 13:47:18,762 INFO [train.py:933] (3/4) Maximum memory allocated so far is 15059MB
2023-03-08 13:47:19,995 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=3.17 vs. limit=2.0
2023-03-08 13:47:22,925 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 13:47:25,450 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=4.55 vs. limit=2.0
2023-03-08 13:47:32,809 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([6.5482, 6.3390, 6.5762, 6.5597, 6.5777, 6.5736, 6.5591, 6.5487], device='cuda:3'), covar=tensor([0.0048, 0.0036, 0.0113, 0.0073, 0.0053, 0.0069, 0.0025, 0.0060], device='cuda:3'), in_proj_covar=tensor([0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009], device='cuda:3'), out_proj_covar=tensor([9.0215e-06, 9.0436e-06, 8.9675e-06, 8.8148e-06, 9.0246e-06, 8.8975e-06, 8.7731e-06, 8.9113e-06], device='cuda:3')
2023-03-08 13:47:42,163 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=23.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 13:47:48,455 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3588, 4.3574, 4.3365, 4.2862, 4.3536, 4.3275, 4.3471, 4.3560], device='cuda:3'), covar=tensor([0.0010, 0.0007, 0.0008, 0.0009, 0.0013, 0.0011, 0.0011, 0.0007], device='cuda:3'), in_proj_covar=tensor([0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009], device='cuda:3'), out_proj_covar=tensor([8.9347e-06, 8.8633e-06, 9.1289e-06, 8.8904e-06, 9.1215e-06, 8.9558e-06, 9.0463e-06, 9.0285e-06], device='cuda:3')
2023-03-08 13:48:01,735 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.7421, 5.6897, 5.6676, 5.7102, 5.7219, 5.7381, 5.7322, 5.6897], device='cuda:3'), covar=tensor([0.0006, 0.0005, 0.0009, 0.0008, 0.0005, 0.0008, 0.0007, 0.0007], device='cuda:3'), in_proj_covar=tensor([0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009], device='cuda:3'), out_proj_covar=tensor([9.1599e-06, 9.2839e-06, 9.1385e-06, 9.3348e-06, 9.1633e-06, 9.2085e-06, 9.2734e-06, 9.2691e-06], device='cuda:3')
2023-03-08 13:48:05,138 INFO [train.py:898] (3/4) Epoch 1, batch 50, loss[loss=1.352, simple_loss=1.196, pruned_loss=1.395, over 17971.00 frames. ], tot_loss[loss=2.137, simple_loss=1.932, pruned_loss=1.961, over 820514.02 frames. ], batch size: 65, lr: 2.75e-02, grad_scale: 1.0
2023-03-08 13:48:08,652 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=46.98 vs. limit=5.0
2023-03-08 13:48:18,840 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=5.52 vs. limit=2.0
2023-03-08 13:48:33,966 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 13:48:37,890 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=5.69 vs. limit=2.0
2023-03-08 13:48:46,638 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=5.32 vs. limit=2.0
2023-03-08 13:48:50,045 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=14.64 vs. limit=2.0
2023-03-08 13:48:50,707 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=4.13 vs. limit=2.0
2023-03-08 13:48:51,689 WARNING [train.py:888] (3/4) Grad scale is small: 0.0009765625
2023-03-08 13:48:51,689 INFO [train.py:898] (3/4) Epoch 1, batch 100, loss[loss=1.178, simple_loss=1.018, pruned_loss=1.272, over 12517.00 frames. ], tot_loss[loss=1.631, simple_loss=1.451, pruned_loss=1.615, over 1418158.43 frames. ], batch size: 130, lr: 3.00e-02, grad_scale: 0.001953125
2023-03-08 13:48:56,806 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=10.30 vs. limit=2.0
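The asr_datamodule lines above build the train dataloader around lhotse's DynamicBucketingSampler, using max_duration=750, num_buckets=30 and num_workers=2 from the config. A minimal sketch of that wiring, assuming standard lhotse APIs; the manifest filename is illustrative, not taken from this log:

from torch.utils.data import DataLoader
from lhotse import CutSet
from lhotse.dataset import DynamicBucketingSampler, K2SpeechRecognitionDataset

# hypothetical manifest path; the log only names the cut sets being loaded
cuts = CutSet.from_file("data/fbank/librispeech_cuts_train-all-shuf.jsonl.gz")
sampler = DynamicBucketingSampler(
    cuts,
    max_duration=750,  # total seconds of audio per batch, as in the config
    num_buckets=30,    # batch cuts of similar duration to limit padding waste
    shuffle=True,
    drop_last=True,
)
dataset = K2SpeechRecognitionDataset(return_cuts=True)
# batch_size=None because the sampler already yields whole batches of cuts
dataloader = DataLoader(dataset, sampler=sampler, batch_size=None, num_workers=2)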
2023-03-08 13:49:00,733 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 6.526e+01 1.420e+02 2.842e+02 1.227e+03 3.323e+06, threshold=5.685e+02, percent-clipped=0.0
2023-03-08 13:49:11,410 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=4.59 vs. limit=2.0
2023-03-08 13:49:18,552 WARNING [optim.py:389] (3/4) Scaling gradients by 0.03670352324843407, model_norm_threshold=568.4981689453125
2023-03-08 13:49:18,712 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.51, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=1.227e+08, grad_sumsq = 3.226e+09, orig_rms_sq=3.802e-02
2023-03-08 13:49:29,073 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=144.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-08 13:49:34,273 INFO [train.py:898] (3/4) Epoch 1, batch 150, loss[loss=1.076, simple_loss=0.9051, pruned_loss=1.224, over 18477.00 frames. ], tot_loss[loss=1.405, simple_loss=1.229, pruned_loss=1.471, over 1910531.52 frames. ], batch size: 59, lr: 3.25e-02, grad_scale: 0.001953125
2023-03-08 13:49:44,799 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=9.84 vs. limit=2.0
2023-03-08 13:50:01,971 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=3.25 vs. limit=2.0
2023-03-08 13:50:06,730 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.99 vs. limit=2.0
2023-03-08 13:50:13,270 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=3.37 vs. limit=2.0
2023-03-08 13:50:16,353 WARNING [train.py:888] (3/4) Grad scale is small: 0.001953125
2023-03-08 13:50:16,354 INFO [train.py:898] (3/4) Epoch 1, batch 200, loss[loss=1.03, simple_loss=0.8601, pruned_loss=1.117, over 18500.00 frames. ], tot_loss[loss=1.268, simple_loss=1.097, pruned_loss=1.348, over 2281591.46 frames. ], batch size: 51, lr: 3.50e-02, grad_scale: 0.00390625
2023-03-08 13:50:32,163 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 7.044e+01 1.307e+02 2.204e+02 4.914e+02 1.549e+04, threshold=4.408e+02, percent-clipped=23.0
2023-03-08 13:50:53,833 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=46.11 vs. limit=5.0
2023-03-08 13:51:03,334 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=38.37 vs. limit=5.0
2023-03-08 13:51:05,108 INFO [train.py:898] (3/4) Epoch 1, batch 250, loss[loss=1.015, simple_loss=0.8592, pruned_loss=0.983, over 12729.00 frames. ], tot_loss[loss=1.18, simple_loss=1.012, pruned_loss=1.249, over 2577777.29 frames. ], batch size: 130, lr: 3.75e-02, grad_scale: 0.00390625
2023-03-08 13:51:37,152 WARNING [optim.py:389] (3/4) Scaling gradients by 0.0006386953755281866, model_norm_threshold=440.7669677734375
2023-03-08 13:51:37,319 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.skip_modules.4.weight1 with proportion 0.43, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=2.042e+11, grad_sumsq = 2.042e+11, orig_rms_sq=1.000e+00
2023-03-08 13:51:43,343 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=296.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 13:51:46,144 WARNING [optim.py:389] (3/4) Scaling gradients by 0.04052559658885002, model_norm_threshold=440.7669677734375
2023-03-08 13:51:46,323 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.77, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=9.126e+07, grad_sumsq = 1.809e+09, orig_rms_sq=5.045e-02
2023-03-08 13:51:46,609 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=300.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-08 13:51:47,137 WARNING [train.py:888] (3/4) Grad scale is small: 0.00390625
2023-03-08 13:51:47,137 INFO [train.py:898] (3/4) Epoch 1, batch 300, loss[loss=0.9484, simple_loss=0.783, pruned_loss=0.9524, over 18257.00 frames. ], tot_loss[loss=1.118, simple_loss=0.9517, pruned_loss=1.168, over 2811740.23 frames. ], batch size: 47, lr: 4.00e-02, grad_scale: 0.0078125
2023-03-08 13:51:55,816 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 8.726e+01 1.552e+02 2.190e+02 3.865e+02 6.901e+05, threshold=4.380e+02, percent-clipped=20.0
2023-03-08 13:52:21,632 WARNING [optim.py:389] (3/4) Scaling gradients by 0.00015154901484493166, model_norm_threshold=438.01873779296875
2023-03-08 13:52:21,876 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.70, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=5.875e+12, grad_sumsq = 1.517e+14, orig_rms_sq=3.874e-02
2023-03-08 13:52:23,425 WARNING [optim.py:389] (3/4) Scaling gradients by 0.01597026363015175, model_norm_threshold=438.01873779296875
2023-03-08 13:52:23,586 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.80, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=6.028e+08, grad_sumsq = 1.556e+10, orig_rms_sq=3.874e-02
2023-03-08 13:52:24,510 WARNING [optim.py:389] (3/4) Scaling gradients by 0.022203104570508003, model_norm_threshold=438.01873779296875
2023-03-08 13:52:24,680 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.86, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=3.351e+08, grad_sumsq = 8.650e+09, orig_rms_sq=3.874e-02
2023-03-08 13:52:26,182 WARNING [optim.py:389] (3/4) Scaling gradients by 0.008352968841791153, model_norm_threshold=438.01873779296875
2023-03-08 13:52:26,425 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.77, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=2.124e+09, grad_sumsq = 5.347e+10, orig_rms_sq=3.973e-02
2023-03-08 13:52:30,360 INFO [train.py:898] (3/4) Epoch 1, batch 350, loss[loss=0.9173, simple_loss=0.7492, pruned_loss=0.9062, over 18356.00 frames. ], tot_loss[loss=1.075, simple_loss=0.908, pruned_loss=1.11, over 2988309.74 frames. ], batch size: 46, lr: 4.25e-02, grad_scale: 0.00390625
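The optim.py:389/451 pairs above fire when the total gradient norm exceeds model_norm_threshold: all gradients are shrunk by the logged factor and the parameter contributing the largest share of the squared norm is reported (the log spells it "Dominanting"). A minimal sketch of such a safeguard, assuming a plain per-parameter accounting; icefall's ScaledAdam additionally weights each gradient by the parameter's rms (the orig_rms_sq factor above), which is omitted here:

import torch

def scale_exploding_grads(model: torch.nn.Module, model_norm_threshold: float) -> None:
    named = [(n, p) for n, p in model.named_parameters() if p.grad is not None]
    sumsq = {n: float(p.grad.pow(2).sum()) for n, p in named}
    total = sum(sumsq.values())
    norm = total ** 0.5
    if norm > model_norm_threshold:
        scale = model_norm_threshold / norm  # e.g. the 0.0367... factor logged above
        for _, p in named:
            p.grad.mul_(scale)
        dominant = max(sumsq, key=sumsq.get)
        print(f"Scaling gradients by {scale}, model_norm_threshold={model_norm_threshold}")
        print(f"Parameter dominating tot_sumsq: {dominant} "
              f"with proportion {sumsq[dominant] / total:.2f}")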
2023-03-08 13:52:31,231 WARNING [optim.py:389] (3/4) Scaling gradients by 0.00011210949014639482, model_norm_threshold=438.01873779296875
2023-03-08 13:52:31,402 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.85, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=1.296e+13, grad_sumsq = 3.160e+14, orig_rms_sq=4.100e-02
2023-03-08 13:52:34,661 WARNING [optim.py:389] (3/4) Scaling gradients by 0.06386774033308029, model_norm_threshold=438.01873779296875
2023-03-08 13:52:34,825 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.58, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=2.733e+07, grad_sumsq = 6.508e+08, orig_rms_sq=4.200e-02
2023-03-08 13:52:35,610 WARNING [optim.py:389] (3/4) Scaling gradients by 0.0002155240799766034, model_norm_threshold=438.01873779296875
2023-03-08 13:52:35,769 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.85, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=3.507e+12, grad_sumsq = 8.280e+13, orig_rms_sq=4.236e-02
2023-03-08 13:52:36,186 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=357.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-08 13:52:43,646 WARNING [optim.py:389] (3/4) Scaling gradients by 0.08033499121665955, model_norm_threshold=438.01873779296875
2023-03-08 13:52:43,822 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.60, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=1.796e+07, grad_sumsq = 4.255e+08, orig_rms_sq=4.221e-02
2023-03-08 13:52:44,553 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=4.92 vs. limit=2.0
2023-03-08 13:52:53,963 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=4.35 vs. limit=2.0
2023-03-08 13:52:55,570 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=50.86 vs. limit=5.0
2023-03-08 13:52:58,806 WARNING [optim.py:389] (3/4) Scaling gradients by 0.00024505704641342163, model_norm_threshold=438.01873779296875
2023-03-08 13:52:58,964 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.67, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=2.127e+12, grad_sumsq = 5.153e+13, orig_rms_sq=4.128e-02
2023-03-08 13:53:00,811 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=387.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 13:53:01,259 WARNING [optim.py:389] (3/4) Scaling gradients by 0.0035240945871919394, model_norm_threshold=438.01873779296875
2023-03-08 13:53:02,222 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.51, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=7.954e+09, grad_sumsq = 1.927e+11, orig_rms_sq=4.128e-02
2023-03-08 13:53:02,950 WARNING [optim.py:389] (3/4) Scaling gradients by 0.00012842776777688414, model_norm_threshold=438.01873779296875
2023-03-08 13:53:03,111 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.65, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=7.588e+12, grad_sumsq = 1.863e+14, orig_rms_sq=4.072e-02
2023-03-08 13:53:10,043 WARNING [optim.py:389] (3/4) Scaling gradients by 0.007196913007646799, model_norm_threshold=438.01873779296875
2023-03-08 13:53:10,210 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoders.4.encoder.layers.0.norm_final.eps with proportion 0.38, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=1.420e+09, grad_sumsq = 1.420e+09, orig_rms_sq=1.000e+00
2023-03-08 13:53:13,992 WARNING [train.py:888] (3/4) Grad scale is small: 0.00390625
2023-03-08 13:53:13,993 INFO [train.py:898] (3/4) Epoch 1, batch 400, loss[loss=0.985, simple_loss=0.7948, pruned_loss=0.9622, over 18631.00 frames. ], tot_loss[loss=1.046, simple_loss=0.8755, pruned_loss=1.066, over 3113427.98 frames. ], batch size: 52, lr: 4.50e-02, grad_scale: 0.0078125
2023-03-08 13:53:14,553 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=51.10 vs. limit=5.0
2023-03-08 13:53:23,505 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.343e+02 2.188e+02 3.037e+02 6.402e+02 3.907e+06, threshold=6.074e+02, percent-clipped=33.0
2023-03-08 13:53:44,546 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=439.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-08 13:53:53,242 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=448.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 13:53:55,311 INFO [train.py:898] (3/4) Epoch 1, batch 450, loss[loss=0.9392, simple_loss=0.751, pruned_loss=0.9006, over 18360.00 frames. ], tot_loss[loss=1.024, simple_loss=0.8496, pruned_loss=1.03, over 3217416.52 frames. ], batch size: 50, lr: 4.75e-02, grad_scale: 0.0078125
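The optim.py:369 lines report five statistics (min, 25%, median, 75%, max) of recent gradient norms, a derived clipping threshold, and the share of batches that were clipped. A rough reimplementation under the assumption that threshold = clipping_scale * median of a recent window; icefall's actual bookkeeping may differ:

import collections
import torch

class GradNormTracker:
    def __init__(self, window: int = 200, clipping_scale: float = 2.0):
        self.norms = collections.deque(maxlen=window)
        self.clipping_scale = clipping_scale
        self.num_clipped = 0
        self.num_seen = 0

    def update(self, grad_norm: float) -> float:
        self.norms.append(grad_norm)
        self.num_seen += 1
        q = torch.quantile(
            torch.tensor(list(self.norms)),
            torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]),
        )
        threshold = self.clipping_scale * q[2].item()  # 2x the median norm
        if grad_norm > threshold:
            self.num_clipped += 1
        return threshold

    def percent_clipped(self) -> float:
        return 100.0 * self.num_clipped / max(1, self.num_seen)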
2023-03-08 13:53:59,209 WARNING [optim.py:389] (3/4) Scaling gradients by 0.001993334386497736, model_norm_threshold=607.3988037109375
2023-03-08 13:53:59,368 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoders.4.encoder.layers.1.norm_final.eps with proportion 0.46, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=4.249e+10, grad_sumsq = 4.249e+10, orig_rms_sq=1.000e+00
2023-03-08 13:54:00,199 WARNING [optim.py:389] (3/4) Scaling gradients by 0.009787621907889843, model_norm_threshold=607.3988037109375
2023-03-08 13:54:00,353 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoders.4.encoder.layers.0.norm_final.eps with proportion 0.37, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=1.407e+09, grad_sumsq = 1.407e+09, orig_rms_sq=1.000e+00
2023-03-08 13:54:09,173 WARNING [optim.py:389] (3/4) Scaling gradients by 0.07029950618743896, model_norm_threshold=607.3988037109375
2023-03-08 13:54:09,329 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.83, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=6.195e+07, grad_sumsq = 1.377e+09, orig_rms_sq=4.498e-02
2023-03-08 13:54:19,288 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=9.29 vs. limit=2.0
2023-03-08 13:54:20,905 WARNING [optim.py:389] (3/4) Scaling gradients by 0.008813662454485893, model_norm_threshold=607.3988037109375
2023-03-08 13:54:21,062 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.85, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=4.028e+09, grad_sumsq = 8.824e+10, orig_rms_sq=4.564e-02
2023-03-08 13:54:26,875 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=5.08 vs. limit=2.0
2023-03-08 13:54:27,985 WARNING [optim.py:389] (3/4) Scaling gradients by 0.024284733459353447, model_norm_threshold=607.3988037109375
2023-03-08 13:54:28,151 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.83, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=5.221e+08, grad_sumsq = 1.167e+10, orig_rms_sq=4.473e-02
2023-03-08 13:54:38,281 WARNING [optim.py:389] (3/4) Scaling gradients by 0.0006707996362820268, model_norm_threshold=607.3988037109375
2023-03-08 13:54:38,442 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoders.4.encoder.layers.1.norm_final.eps with proportion 0.69, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=5.647e+11, grad_sumsq = 5.647e+11, orig_rms_sq=1.000e+00
2023-03-08 13:54:38,471 WARNING [train.py:888] (3/4) Grad scale is small: 0.0078125
2023-03-08 13:54:38,472 INFO [train.py:898] (3/4) Epoch 1, batch 500, loss[loss=0.9036, simple_loss=0.7077, pruned_loss=0.8745, over 18430.00 frames. ], tot_loss[loss=1.014, simple_loss=0.8316, pruned_loss=1.01, over 3310147.85 frames. ], batch size: 43, lr: 4.99e-02, grad_scale: 0.015625
2023-03-08 13:54:42,462 WARNING [optim.py:389] (3/4) Scaling gradients by 0.006503281649202108, model_norm_threshold=607.3988037109375
2023-03-08 13:54:42,624 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoders.1.out_combiner.weight1 with proportion 0.48, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=4.173e+09, grad_sumsq = 4.173e+09, orig_rms_sq=1.000e+00
2023-03-08 13:54:48,223 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.357e+02 2.649e+02 4.541e+02 8.074e+02 9.055e+05, threshold=9.081e+02, percent-clipped=35.0
2023-03-08 13:54:48,224 WARNING [optim.py:389] (3/4) Scaling gradients by 0.07500762492418289, model_norm_threshold=908.1141357421875
2023-03-08 13:54:48,403 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoders.1.out_combiner.weight1 with proportion 0.80, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=1.178e+08, grad_sumsq = 1.178e+08, orig_rms_sq=1.000e+00
2023-03-08 13:54:53,818 WARNING [optim.py:389] (3/4) Scaling gradients by 0.00848373118788004, model_norm_threshold=908.1141357421875
2023-03-08 13:54:53,974 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoders.2.out_combiner.weight1 with proportion 0.43, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=4.921e+09, grad_sumsq = 4.921e+09, orig_rms_sq=1.000e+00
2023-03-08 13:54:54,845 WARNING [optim.py:389] (3/4) Scaling gradients by 0.0037236642092466354, model_norm_threshold=908.1141357421875
2023-03-08 13:54:55,004 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.skip_modules.4.weight1 with proportion 0.69, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=4.127e+10, grad_sumsq = 4.127e+10, orig_rms_sq=1.000e+00
2023-03-08 13:55:00,560 WARNING [optim.py:389] (3/4) Scaling gradients by 0.0036443807184696198, model_norm_threshold=908.1141357421875
2023-03-08 13:55:00,738 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.88, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=5.460e+10, grad_sumsq = 1.061e+12, orig_rms_sq=5.145e-02
2023-03-08 13:55:01,572 WARNING [optim.py:389] (3/4) Scaling gradients by 0.004900030791759491, model_norm_threshold=908.1141357421875
2023-03-08 13:55:01,730 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.35, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=1.186e+10, grad_sumsq = 2.280e+11, orig_rms_sq=5.204e-02
2023-03-08 13:55:02,554 WARNING [optim.py:389] (3/4) Scaling gradients by 0.07598941773176193, model_norm_threshold=908.1141357421875
2023-03-08 13:55:02,716 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.93, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=1.329e+08, grad_sumsq = 2.553e+09, orig_rms_sq=5.204e-02
2023-03-08 13:55:05,076 WARNING [optim.py:389] (3/4) Scaling gradients by 0.028503550216555595, model_norm_threshold=908.1141357421875
2023-03-08 13:55:05,233 INFO [optim.py:451] (3/4) Parameter Dominanting tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.78, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=7.871e+08, grad_sumsq = 1.620e+10, orig_rms_sq=4.859e-02
2023-03-08 13:55:21,081 INFO [train.py:898] (3/4) Epoch 1, batch 550, loss[loss=0.9036, simple_loss=0.7019, pruned_loss=0.8576, over 18421.00 frames. ], tot_loss[loss=1.003, simple_loss=0.8153, pruned_loss=0.9876, over 3357645.21 frames. ], batch size: 48, lr: 4.98e-02, grad_scale: 0.015625
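The "Grad scale is small" warnings and the grad_scale values in the batch lines come from mixed-precision training (use_fp16=True): after repeated fp16 overflows, torch.cuda.amp.GradScaler backs its scale off to tiny values such as 0.0009765625 (2^-10) before growing it back, which is why grad_scale climbs 0.0078125 -> 0.015625 across this stretch. A minimal sketch of one monitored step; the warning threshold here is an assumption, not icefall's exact check:

import torch

def fp16_step(model, loss, optimizer, scaler: torch.cuda.amp.GradScaler, batch_idx: int):
    optimizer.zero_grad()
    scaler.scale(loss).backward()
    scaler.step(optimizer)  # skipped internally if the grads overflowed
    scaler.update()         # halves the scale on overflow, slowly grows it otherwise
    grad_scale = scaler.get_scale()
    if grad_scale < 1.0:    # assumed threshold for the warning
        print(f"Grad scale is small: {grad_scale}")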
2023-03-08 13:55:29,957 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=562.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 13:55:32,354 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.8174, 5.8644, 5.8203, 5.8254, 5.7572, 5.8593, 5.7902, 5.8602], device='cuda:3'), covar=tensor([0.0114, 0.0087, 0.0163, 0.0129, 0.0502, 0.0103, 0.0151, 0.0089], device='cuda:3'), in_proj_covar=tensor([0.0010, 0.0011, 0.0011, 0.0011, 0.0011, 0.0010, 0.0010, 0.0011], device='cuda:3'), out_proj_covar=tensor([1.0670e-05, 1.1365e-05, 1.0784e-05, 1.0753e-05, 1.0571e-05, 1.0669e-05, 1.0663e-05, 1.0853e-05], device='cuda:3')
2023-03-08 13:55:34,711 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=568.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 13:55:51,316 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=99.92 vs. limit=5.0
2023-03-08 13:55:51,834 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=590.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 13:55:58,955 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=6.94 vs. limit=2.0
2023-03-08 13:56:00,909 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=600.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-08 13:56:01,187 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=11.41 vs. limit=2.0
2023-03-08 13:56:01,484 INFO [train.py:898] (3/4) Epoch 1, batch 600, loss[loss=0.9975, simple_loss=0.7672, pruned_loss=0.9327, over 18484.00 frames. ], tot_loss[loss=0.9984, simple_loss=0.8023, pruned_loss=0.9721, over 3418728.73 frames. ], batch size: 53, lr: 4.98e-02, grad_scale: 0.03125
2023-03-08 13:56:02,641 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=4.17 vs. limit=2.0
2023-03-08 13:56:05,377 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=13.95 vs. limit=2.0
2023-03-08 13:56:11,699 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.187e+02 4.468e+02 7.848e+02 1.243e+03 2.492e+05, threshold=1.570e+03, percent-clipped=35.0
2023-03-08 13:56:19,694 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=623.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 13:56:22,052 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=7.72 vs. limit=2.0
2023-03-08 13:56:24,188 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=629.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-08 13:56:27,531 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=48.92 vs. limit=5.0
2023-03-08 13:56:39,135 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=648.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 13:56:41,333 INFO [train.py:898] (3/4) Epoch 1, batch 650, loss[loss=0.945, simple_loss=0.7106, pruned_loss=0.8908, over 17747.00 frames. ], tot_loss[loss=0.9964, simple_loss=0.7921, pruned_loss=0.959, over 3467272.11 frames. ], batch size: 39, lr: 4.98e-02, grad_scale: 0.03125
2023-03-08 13:56:42,239 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=651.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 13:56:42,932 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=652.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-08 13:56:49,551 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=93.18 vs. limit=5.0
2023-03-08 13:57:20,444 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=9.01 vs. limit=2.0
2023-03-08 13:57:22,334 INFO [train.py:898] (3/4) Epoch 1, batch 700, loss[loss=0.9476, simple_loss=0.7204, pruned_loss=0.8509, over 18353.00 frames. ], tot_loss[loss=0.9958, simple_loss=0.784, pruned_loss=0.9464, over 3493908.80 frames. ], batch size: 46, lr: 4.98e-02, grad_scale: 0.0625
2023-03-08 13:57:33,543 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.059e+02 3.958e+02 5.670e+02 9.058e+02 3.205e+03, threshold=1.134e+03, percent-clipped=9.0
2023-03-08 13:57:43,049 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=10.32 vs. limit=2.0
2023-03-08 13:57:54,413 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=739.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 13:57:57,517 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=743.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-08 13:58:03,600 INFO [train.py:898] (3/4) Epoch 1, batch 750, loss[loss=0.9131, simple_loss=0.6868, pruned_loss=0.811, over 17211.00 frames. ], tot_loss[loss=0.9948, simple_loss=0.7758, pruned_loss=0.9328, over 3521628.04 frames. ], batch size: 38, lr: 4.97e-02, grad_scale: 0.0625
2023-03-08 13:58:28,262 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=5.14 vs. limit=2.0
2023-03-08 13:58:34,296 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=787.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 13:58:45,698 INFO [train.py:898] (3/4) Epoch 1, batch 800, loss[loss=0.9308, simple_loss=0.6944, pruned_loss=0.8155, over 18499.00 frames. ], tot_loss[loss=0.9946, simple_loss=0.7688, pruned_loss=0.9203, over 3523726.05 frames. ], batch size: 44, lr: 4.97e-02, grad_scale: 0.125
2023-03-08 13:58:55,624 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.968e+02 3.286e+02 5.870e+02 9.737e+02 2.138e+03, threshold=1.174e+03, percent-clipped=19.0
2023-03-08 13:59:23,881 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=847.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 13:59:26,827 INFO [train.py:898] (3/4) Epoch 1, batch 850, loss[loss=1.024, simple_loss=0.7582, pruned_loss=0.8852, over 18280.00 frames. ], tot_loss[loss=0.999, simple_loss=0.7654, pruned_loss=0.9116, over 3540451.72 frames. ], batch size: 49, lr: 4.96e-02, grad_scale: 0.125
2023-03-08 14:00:09,701 INFO [train.py:898] (3/4) Epoch 1, batch 900, loss[loss=0.9795, simple_loss=0.7144, pruned_loss=0.8429, over 18411.00 frames. ], tot_loss[loss=1.003, simple_loss=0.7627, pruned_loss=0.9021, over 3546439.25 frames. ], batch size: 48, lr: 4.96e-02, grad_scale: 0.25
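The zipformer.py:625 lines show whole encoder layers being stochastically dropped while a stack is still inside its warmup interval (warmup_begin..warmup_end, measured in batches); note how num_to_drop=0 becomes more common as batch_count approaches warmup_end. A toy version of such a schedule, assumed from the logged fields rather than taken from zipformer.py:

import random

def choose_layers_to_drop(batch_count, num_layers, warmup_begin, warmup_end, max_drop=2):
    # drop probability fades linearly to zero as batch_count reaches warmup_end
    if batch_count >= warmup_end:
        return set()
    frac = min(1.0, (warmup_end - batch_count) / (warmup_end - warmup_begin))
    num_to_drop = sum(random.random() < frac for _ in range(max_drop))
    return set(random.sample(range(num_layers), num_to_drop))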
2023-03-08 14:00:15,585 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=908.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-08 14:00:19,232 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.803e+02 2.708e+02 4.690e+02 7.118e+02 2.600e+03, threshold=9.379e+02, percent-clipped=5.0
2023-03-08 14:00:24,224 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=918.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-08 14:00:29,518 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=924.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-08 14:00:38,937 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=6.91 vs. limit=2.0
2023-03-08 14:00:47,938 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=946.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-08 14:00:51,735 INFO [train.py:898] (3/4) Epoch 1, batch 950, loss[loss=0.9374, simple_loss=0.687, pruned_loss=0.7836, over 18380.00 frames. ], tot_loss[loss=1.005, simple_loss=0.7578, pruned_loss=0.8902, over 3567534.01 frames. ], batch size: 42, lr: 4.96e-02, grad_scale: 0.25
2023-03-08 14:00:52,741 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=952.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 14:01:22,721 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6936, 5.6472, 5.7299, 5.6814, 5.7021, 5.6713, 5.6245, 5.6076], device='cuda:3'), covar=tensor([0.0246, 0.0394, 0.0181, 0.0279, 0.0214, 0.0319, 0.0460, 0.0626], device='cuda:3'), in_proj_covar=tensor([0.0012, 0.0013, 0.0012, 0.0012, 0.0012, 0.0012, 0.0012, 0.0013], device='cuda:3'), out_proj_covar=tensor([1.2603e-05, 1.3241e-05, 1.2226e-05, 1.2478e-05, 1.2846e-05, 1.2768e-05, 1.2583e-05, 1.2436e-05], device='cuda:3')
2023-03-08 14:01:33,195 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=1000.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 14:01:33,773 INFO [train.py:898] (3/4) Epoch 1, batch 1000, loss[loss=0.9754, simple_loss=0.7226, pruned_loss=0.7879, over 18387.00 frames. ], tot_loss[loss=1.009, simple_loss=0.7566, pruned_loss=0.8798, over 3572900.33 frames. ], batch size: 50, lr: 4.95e-02, grad_scale: 0.5
2023-03-08 14:01:35,907 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.87 vs. limit=2.0
2023-03-08 14:01:43,698 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.541e+02 3.491e+02 4.812e+02 7.825e+02 1.437e+03, threshold=9.623e+02, percent-clipped=15.0
2023-03-08 14:01:44,226 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=20.42 vs. limit=5.0
2023-03-08 14:01:55,759 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=13.52 vs. limit=5.0
2023-03-08 14:01:58,199 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=14.15 vs. limit=5.0
2023-03-08 14:02:10,258 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1043.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 14:02:17,218 INFO [train.py:898] (3/4) Epoch 1, batch 1050, loss[loss=0.99, simple_loss=0.7422, pruned_loss=0.7728, over 18480.00 frames. ], tot_loss[loss=1.007, simple_loss=0.7534, pruned_loss=0.8619, over 3589329.03 frames. ], batch size: 51, lr: 4.95e-02, grad_scale: 0.5
2023-03-08 14:02:52,454 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=1091.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 14:02:59,172 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.67 vs. limit=2.0
2023-03-08 14:03:01,211 INFO [train.py:898] (3/4) Epoch 1, batch 1100, loss[loss=0.9121, simple_loss=0.702, pruned_loss=0.6774, over 18497.00 frames. ], tot_loss[loss=1.002, simple_loss=0.7506, pruned_loss=0.8378, over 3603152.04 frames. ], batch size: 47, lr: 4.94e-02, grad_scale: 1.0
2023-03-08 14:03:10,798 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.010e+02 4.676e+02 7.290e+02 9.507e+02 1.781e+03, threshold=1.458e+03, percent-clipped=22.0
2023-03-08 14:03:44,980 INFO [train.py:898] (3/4) Epoch 1, batch 1150, loss[loss=0.8444, simple_loss=0.6685, pruned_loss=0.596, over 18425.00 frames. ], tot_loss[loss=0.9857, simple_loss=0.7437, pruned_loss=0.8027, over 3602569.40 frames. ], batch size: 48, lr: 4.94e-02, grad_scale: 1.0
2023-03-08 14:03:46,833 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1153.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 14:04:14,084 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=7.29 vs. limit=5.0
2023-03-08 14:04:20,343 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=8.58 vs. limit=5.0
2023-03-08 14:04:28,514 INFO [train.py:898] (3/4) Epoch 1, batch 1200, loss[loss=0.7331, simple_loss=0.591, pruned_loss=0.499, over 17725.00 frames. ], tot_loss[loss=0.9584, simple_loss=0.7299, pruned_loss=0.7593, over 3597426.21 frames. ], batch size: 39, lr: 4.93e-02, grad_scale: 2.0
2023-03-08 14:04:30,993 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1203.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-08 14:04:38,785 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.547e+02 6.963e+02 9.525e+02 1.418e+03 3.358e+03, threshold=1.905e+03, percent-clipped=24.0
2023-03-08 14:04:39,956 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1214.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 14:04:43,117 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1218.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 14:04:47,899 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1224.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 14:05:06,887 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1246.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 14:05:11,321 INFO [train.py:898] (3/4) Epoch 1, batch 1250, loss[loss=0.6957, simple_loss=0.5728, pruned_loss=0.4556, over 18476.00 frames. ], tot_loss[loss=0.9272, simple_loss=0.7149, pruned_loss=0.7129, over 3596851.28 frames. ], batch size: 43, lr: 4.92e-02, grad_scale: 2.0
2023-03-08 14:05:24,174 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=1266.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 14:05:28,927 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=1272.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:05:36,173 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=6.93 vs. limit=5.0
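The scaling.py:679 "Whitening" lines compare a whiteness statistic of a module's activations against a limit (2.0 or 5.0 here); values far above the limit early in training indicate highly anisotropic features that the Whiten module then pushes back toward isotropy. One standard way to compute such a metric, equal to 1.0 for perfectly white features; this is an assumed formulation, and icefall's exact formula may differ:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    x = x.reshape(-1, x.shape[-1])                    # (frames, num_channels)
    c = x.shape[1] // num_groups                      # channels per group
    x = x.reshape(-1, num_groups, c).transpose(0, 1)  # (groups, frames, c)
    # per-group covariance (mean subtraction omitted for brevity)
    cov = x.transpose(1, 2) @ x / x.shape[1]          # (groups, c, c)
    trace = cov.diagonal(dim1=1, dim2=2).sum(dim=-1)
    trace_sq = (cov * cov).sum(dim=(1, 2))            # trace(C @ C) for symmetric C
    # sum(eig^2) * c / sum(eig)^2 >= 1, with equality iff C is a multiple of I
    return (trace_sq * c / trace.pow(2)).mean().item()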
2023-03-08 14:05:48,370 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=1294.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:05:49,741 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.21 vs. limit=2.0
2023-03-08 14:05:53,955 INFO [train.py:898] (3/4) Epoch 1, batch 1300, loss[loss=0.7891, simple_loss=0.6535, pruned_loss=0.508, over 18304.00 frames. ], tot_loss[loss=0.8888, simple_loss=0.6944, pruned_loss=0.6632, over 3607356.88 frames. ], batch size: 54, lr: 4.92e-02, grad_scale: 2.0
2023-03-08 14:06:04,918 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 4.724e+02 7.521e+02 1.026e+03 1.273e+03 2.275e+03, threshold=2.053e+03, percent-clipped=5.0
2023-03-08 14:06:12,484 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.94 vs. limit=2.0
2023-03-08 14:06:38,445 INFO [train.py:898] (3/4) Epoch 1, batch 1350, loss[loss=0.6087, simple_loss=0.5221, pruned_loss=0.3714, over 18269.00 frames. ], tot_loss[loss=0.8523, simple_loss=0.6759, pruned_loss=0.6168, over 3599519.10 frames. ], batch size: 45, lr: 4.91e-02, grad_scale: 2.0
2023-03-08 14:06:56,585 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.84 vs. limit=2.0
2023-03-08 14:07:24,100 INFO [train.py:898] (3/4) Epoch 1, batch 1400, loss[loss=0.707, simple_loss=0.605, pruned_loss=0.4297, over 18618.00 frames. ], tot_loss[loss=0.8185, simple_loss=0.6592, pruned_loss=0.5744, over 3591434.42 frames. ], batch size: 52, lr: 4.91e-02, grad_scale: 2.0
2023-03-08 14:07:35,950 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 4.189e+02 7.494e+02 9.042e+02 1.119e+03 2.290e+03, threshold=1.808e+03, percent-clipped=1.0
2023-03-08 14:07:37,422 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.51 vs. limit=5.0
2023-03-08 14:07:55,210 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1434.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 14:08:09,506 INFO [train.py:898] (3/4) Epoch 1, batch 1450, loss[loss=0.7687, simple_loss=0.6524, pruned_loss=0.4687, over 16960.00 frames. ], tot_loss[loss=0.7847, simple_loss=0.6412, pruned_loss=0.5349, over 3585705.12 frames. ], batch size: 78, lr: 4.90e-02, grad_scale: 2.0
2023-03-08 14:08:10,007 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.07 vs. limit=5.0
2023-03-08 14:08:50,670 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1495.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-08 14:08:55,505 INFO [train.py:898] (3/4) Epoch 1, batch 1500, loss[loss=0.5565, simple_loss=0.5006, pruned_loss=0.3143, over 18250.00 frames. ], tot_loss[loss=0.7522, simple_loss=0.624, pruned_loss=0.4983, over 3588060.15 frames. ], batch size: 47, lr: 4.89e-02, grad_scale: 2.0
2023-03-08 14:08:57,710 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1503.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 14:09:03,868 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1509.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-08 14:09:07,632 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.367e+02 6.843e+02 9.252e+02 1.183e+03 2.667e+03, threshold=1.850e+03, percent-clipped=7.0
2023-03-08 14:09:16,085 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.22 vs. limit=2.0
2023-03-08 14:09:21,860 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.1814, 2.0215, 2.5330, 2.3638, 2.5971, 2.4177, 2.4369, 2.3041], device='cuda:3'), covar=tensor([0.7503, 0.7507, 0.5430, 0.6190, 0.6178, 0.6768, 0.6774, 0.7150], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0062, 0.0058, 0.0060, 0.0060, 0.0063, 0.0061, 0.0055], device='cuda:3'), out_proj_covar=tensor([5.5425e-05, 5.9153e-05, 5.4201e-05, 5.1992e-05, 5.4458e-05, 5.8154e-05, 5.5958e-05, 5.1567e-05], device='cuda:3')
2023-03-08 14:09:43,244 INFO [train.py:898] (3/4) Epoch 1, batch 1550, loss[loss=0.6493, simple_loss=0.5817, pruned_loss=0.3673, over 17971.00 frames. ], tot_loss[loss=0.7209, simple_loss=0.6073, pruned_loss=0.4644, over 3594870.47 frames. ], batch size: 65, lr: 4.89e-02, grad_scale: 2.0
2023-03-08 14:09:43,422 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=1551.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:09:58,509 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1566.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 14:10:16,014 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.2700, 2.0759, 2.4189, 2.2937, 2.4349, 2.4374, 2.4823, 2.3846], device='cuda:3'), covar=tensor([0.4535, 0.6227, 0.4302, 0.4584, 0.5228, 0.5412, 0.4836, 0.4288], device='cuda:3'), in_proj_covar=tensor([0.0058, 0.0062, 0.0056, 0.0058, 0.0058, 0.0063, 0.0058, 0.0053], device='cuda:3'), out_proj_covar=tensor([5.2471e-05, 5.7995e-05, 5.2564e-05, 4.9999e-05, 5.1385e-05, 5.7764e-05, 5.2260e-05, 4.8431e-05], device='cuda:3')
2023-03-08 14:10:26,142 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.18 vs. limit=5.0
2023-03-08 14:10:30,629 INFO [train.py:898] (3/4) Epoch 1, batch 1600, loss[loss=0.5786, simple_loss=0.512, pruned_loss=0.3309, over 18478.00 frames. ], tot_loss[loss=0.6923, simple_loss=0.5915, pruned_loss=0.4346, over 3589656.44 frames. ], batch size: 43, lr: 4.88e-02, grad_scale: 4.0
2023-03-08 14:10:32,800 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0962, 4.7737, 5.0303, 5.3486, 4.9418, 5.1171, 5.0459, 4.8859], device='cuda:3'), covar=tensor([0.1756, 0.2389, 0.1739, 0.1682, 0.1996, 0.1585, 0.1542, 0.2204], device='cuda:3'), in_proj_covar=tensor([0.0029, 0.0036, 0.0027, 0.0035, 0.0027, 0.0029, 0.0030, 0.0031], device='cuda:3'), out_proj_covar=tensor([1.7564e-05, 2.3922e-05, 1.5543e-05, 2.0318e-05, 1.4953e-05, 1.7204e-05, 1.8489e-05, 1.8796e-05], device='cuda:3')
2023-03-08 14:10:39,885 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1611.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 14:10:41,385 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 4.398e+02 7.144e+02 8.708e+02 1.119e+03 2.244e+03, threshold=1.742e+03, percent-clipped=2.0
2023-03-08 14:10:55,189 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1627.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-08 14:11:16,717 INFO [train.py:898] (3/4) Epoch 1, batch 1650, loss[loss=0.5873, simple_loss=0.5423, pruned_loss=0.3189, over 15948.00 frames. ], tot_loss[loss=0.6665, simple_loss=0.5774, pruned_loss=0.4083, over 3579510.26 frames. ], batch size: 94, lr: 4.87e-02, grad_scale: 4.0
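The zipformer.py:1455 lines dump per-head diagnostics: attn_weights_entropy is one entropy value per attention head (8 heads above), with the accompanying covar tensors tracking its variability. A minimal sketch of the entropy computation, assuming an attention tensor of shape (num_heads, batch, num_queries, num_keys) whose rows sum to 1; this is an illustrative layout, not necessarily zipformer's:

import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    # entropy -sum(p * log p) over the key axis, averaged over batch and
    # queries, yielding one value per head like the 8-element tensors above;
    # near-zero entropy means heads attend to single positions, high entropy
    # means nearly uniform attention
    p = attn.clamp(min=1e-20)
    return -(p * p.log()).sum(dim=-1).mean(dim=(1, 2))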
2023-03-08 14:11:36,821 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1672.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-08 14:11:49,955 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1361, 3.2708, 3.2766, 3.2088, 2.9133, 3.5286, 3.2461, 3.1414], device='cuda:3'), covar=tensor([0.3309, 0.2589, 0.2894, 0.3054, 0.4605, 0.2340, 0.3627, 0.3564], device='cuda:3'), in_proj_covar=tensor([0.0074, 0.0061, 0.0066, 0.0064, 0.0065, 0.0065, 0.0065, 0.0067], device='cuda:3'), out_proj_covar=tensor([6.8357e-05, 5.8578e-05, 5.9533e-05, 5.9642e-05, 5.9013e-05, 5.9009e-05, 5.7613e-05, 5.9988e-05], device='cuda:3')
2023-03-08 14:12:04,569 INFO [train.py:898] (3/4) Epoch 1, batch 1700, loss[loss=0.5587, simple_loss=0.5256, pruned_loss=0.2965, over 18511.00 frames. ], tot_loss[loss=0.6389, simple_loss=0.5621, pruned_loss=0.3818, over 3585004.26 frames. ], batch size: 53, lr: 4.86e-02, grad_scale: 4.0
2023-03-08 14:12:11,863 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.19 vs. limit=5.0
2023-03-08 14:12:15,871 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.590e+02 6.591e+02 8.765e+02 1.033e+03 1.987e+03, threshold=1.753e+03, percent-clipped=3.0
2023-03-08 14:12:52,823 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0816, 4.7468, 4.1831, 4.4079, 4.3926, 4.6886, 4.2335, 4.2626], device='cuda:3'), covar=tensor([0.0882, 0.0505, 0.1394, 0.0953, 0.1080, 0.0623, 0.1451, 0.1073], device='cuda:3'), in_proj_covar=tensor([0.0037, 0.0032, 0.0040, 0.0034, 0.0041, 0.0033, 0.0039, 0.0039], device='cuda:3'), out_proj_covar=tensor([3.4709e-05, 3.0196e-05, 3.6954e-05, 3.2003e-05, 3.7307e-05, 3.2033e-05, 3.8089e-05, 3.4470e-05], device='cuda:3')
2023-03-08 14:12:53,411 INFO [train.py:898] (3/4) Epoch 1, batch 1750, loss[loss=0.5595, simple_loss=0.5288, pruned_loss=0.2953, over 17043.00 frames. ], tot_loss[loss=0.6154, simple_loss=0.5494, pruned_loss=0.3593, over 3583011.69 frames. ], batch size: 78, lr: 4.86e-02, grad_scale: 4.0
2023-03-08 14:13:26,336 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6976, 5.4769, 5.2653, 5.5393, 5.5889, 5.3139, 5.3348, 5.1532], device='cuda:3'), covar=tensor([0.0566, 0.0574, 0.1110, 0.0914, 0.0785, 0.0876, 0.1026, 0.1000], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0086, 0.0095, 0.0082, 0.0088, 0.0109, 0.0105, 0.0105], device='cuda:3'), out_proj_covar=tensor([7.7438e-05, 8.2370e-05, 9.1867e-05, 8.0805e-05, 8.2113e-05, 1.0999e-04, 1.0285e-04, 1.0076e-04], device='cuda:3')
2023-03-08 14:13:30,715 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1790.0, num_to_drop=1, layers_to_drop={3}
2023-03-08 14:13:41,364 INFO [train.py:898] (3/4) Epoch 1, batch 1800, loss[loss=0.4798, simple_loss=0.4652, pruned_loss=0.2461, over 18352.00 frames. ], tot_loss[loss=0.5977, simple_loss=0.5399, pruned_loss=0.3423, over 3585731.19 frames. ], batch size: 46, lr: 4.85e-02, grad_scale: 4.0
2023-03-08 14:13:48,921 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1809.0, num_to_drop=1, layers_to_drop={2}
2023-03-08 14:13:52,144 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 4.336e+02 7.100e+02 8.751e+02 1.042e+03 2.911e+03, threshold=1.750e+03, percent-clipped=4.0
2023-03-08 14:14:17,034 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1838.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 14:14:28,757 INFO [train.py:898] (3/4) Epoch 1, batch 1850, loss[loss=0.4984, simple_loss=0.4895, pruned_loss=0.2523, over 18505.00 frames. ], tot_loss[loss=0.5772, simple_loss=0.5285, pruned_loss=0.3241, over 3595084.29 frames. ], batch size: 53, lr: 4.84e-02, grad_scale: 4.0
2023-03-08 14:14:35,119 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=1857.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:14:35,287 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1857.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 14:14:47,436 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1870.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 14:14:49,411 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0856, 3.1669, 3.3664, 3.1660, 3.2805, 3.7876, 3.1009, 3.3678], device='cuda:3'), covar=tensor([0.3671, 0.2410, 0.1819, 0.1836, 0.2045, 0.0886, 0.3367, 0.1904], device='cuda:3'), in_proj_covar=tensor([0.0074, 0.0063, 0.0070, 0.0063, 0.0065, 0.0063, 0.0066, 0.0068], device='cuda:3'), out_proj_covar=tensor([6.8210e-05, 5.8600e-05, 6.2053e-05, 5.6583e-05, 5.8531e-05, 5.6412e-05, 5.8804e-05, 6.0524e-05], device='cuda:3')
2023-03-08 14:15:15,942 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1899.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-08 14:15:17,459 INFO [train.py:898] (3/4) Epoch 1, batch 1900, loss[loss=0.5159, simple_loss=0.5035, pruned_loss=0.2634, over 18498.00 frames. ], tot_loss[loss=0.5579, simple_loss=0.5177, pruned_loss=0.3075, over 3602213.68 frames. ], batch size: 53, lr: 4.83e-02, grad_scale: 4.0
2023-03-08 14:15:29,152 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.595e+02 6.983e+02 8.615e+02 1.111e+03 2.145e+03, threshold=1.723e+03, percent-clipped=1.0
2023-03-08 14:15:34,103 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1918.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-08 14:15:37,764 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1922.0, num_to_drop=1, layers_to_drop={3}
2023-03-08 14:15:46,075 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1931.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-08 14:16:06,111 INFO [train.py:898] (3/4) Epoch 1, batch 1950, loss[loss=0.4915, simple_loss=0.4971, pruned_loss=0.2422, over 18290.00 frames. ], tot_loss[loss=0.5424, simple_loss=0.5092, pruned_loss=0.2943, over 3601034.04 frames. ], batch size: 54, lr: 4.83e-02, grad_scale: 4.0
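The lr values in the batch lines first ramp up to base_lr=0.05 (batches 0-500) and then decay slowly (4.86e-02 down to 4.83e-02 across this stretch), consistent with icefall's Eden schedule driven by lr_batches=5000 and lr_epochs=3.5 from the config. A sketch of the decay part, following Eden's published form; the initial linear warmup is omitted here:

def eden_lr(base_lr: float, batch: int, epoch: float,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    # lr shrinks as an inverse fourth root in both batch index and epoch
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

For example, eden_lr(0.05, 1800, 1.0) is roughly 4.8e-02, in line with the values logged around batch 1800.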
2023-03-08 14:16:21,764 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1967.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:16:46,752 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8246, 4.6530, 4.7392, 4.5180, 4.6771, 4.2896, 5.0109, 4.9994], device='cuda:3'), covar=tensor([0.0740, 0.1271, 0.0905, 0.0935, 0.0803, 0.1200, 0.0589, 0.0691], device='cuda:3'), in_proj_covar=tensor([0.0059, 0.0056, 0.0062, 0.0056, 0.0063, 0.0065, 0.0061, 0.0057], device='cuda:3'), out_proj_covar=tensor([4.8382e-05, 4.6953e-05, 5.2658e-05, 4.5118e-05, 5.4663e-05, 5.7567e-05, 5.1176e-05, 4.8005e-05], device='cuda:3')
2023-03-08 14:17:00,018 INFO [train.py:898] (3/4) Epoch 1, batch 2000, loss[loss=0.4786, simple_loss=0.4842, pruned_loss=0.2364, over 18287.00 frames. ], tot_loss[loss=0.5284, simple_loss=0.5017, pruned_loss=0.2825, over 3603467.68 frames. ], batch size: 54, lr: 4.82e-02, grad_scale: 8.0
2023-03-08 14:17:12,316 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.598e+02 6.794e+02 8.524e+02 1.041e+03 1.894e+03, threshold=1.705e+03, percent-clipped=5.0
2023-03-08 14:17:53,240 INFO [train.py:898] (3/4) Epoch 1, batch 2050, loss[loss=0.4279, simple_loss=0.4251, pruned_loss=0.2153, over 18409.00 frames. ], tot_loss[loss=0.51, simple_loss=0.4913, pruned_loss=0.2682, over 3602787.19 frames. ], batch size: 43, lr: 4.81e-02, grad_scale: 8.0
2023-03-08 14:18:32,412 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.09 vs. limit=2.0
2023-03-08 14:18:34,107 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2090.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-08 14:18:45,445 INFO [train.py:898] (3/4) Epoch 1, batch 2100, loss[loss=0.481, simple_loss=0.486, pruned_loss=0.238, over 18351.00 frames. ], tot_loss[loss=0.494, simple_loss=0.4829, pruned_loss=0.2556, over 3599043.32 frames. ], batch size: 56, lr: 4.80e-02, grad_scale: 8.0
2023-03-08 14:18:58,620 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.519e+02 5.789e+02 7.175e+02 9.165e+02 1.305e+03, threshold=1.435e+03, percent-clipped=0.0
2023-03-08 14:19:03,930 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.01 vs. limit=2.0
2023-03-08 14:19:22,469 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1868, 5.0965, 4.9277, 5.2924, 4.9838, 4.8857, 5.0516, 4.4889], device='cuda:3'), covar=tensor([0.0491, 0.0416, 0.0989, 0.0415, 0.0519, 0.0585, 0.0593, 0.0809], device='cuda:3'), in_proj_covar=tensor([0.0081, 0.0094, 0.0099, 0.0087, 0.0091, 0.0114, 0.0112, 0.0111], device='cuda:3'), out_proj_covar=tensor([7.8270e-05, 9.2677e-05, 1.0270e-04, 8.6649e-05, 8.5701e-05, 1.1962e-04, 1.1625e-04, 1.1102e-04], device='cuda:3')
2023-03-08 14:19:24,327 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=2138.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:19:38,138 INFO [train.py:898] (3/4) Epoch 1, batch 2150, loss[loss=0.3904, simple_loss=0.4181, pruned_loss=0.1814, over 18491.00 frames. ], tot_loss[loss=0.479, simple_loss=0.4742, pruned_loss=0.2442, over 3608359.96 frames. ], batch size: 47, lr: 4.79e-02, grad_scale: 8.0
2023-03-08 14:19:58,302 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.08 vs. limit=2.0
2023-03-08 14:20:23,719 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2194.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:20:31,429 INFO [train.py:898] (3/4) Epoch 1, batch 2200, loss[loss=0.408, simple_loss=0.4383, pruned_loss=0.1888, over 18547.00 frames. ], tot_loss[loss=0.4662, simple_loss=0.4669, pruned_loss=0.2345, over 3604202.78 frames. ], batch size: 49, lr: 4.78e-02, grad_scale: 8.0
2023-03-08 14:20:43,087 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5982, 4.7385, 5.0918, 4.9254, 4.2531, 4.5894, 4.5735, 5.1688], device='cuda:3'), covar=tensor([0.0291, 0.0174, 0.0184, 0.0118, 0.0569, 0.0192, 0.0241, 0.0032], device='cuda:3'), in_proj_covar=tensor([0.0017, 0.0016, 0.0014, 0.0016, 0.0021, 0.0016, 0.0016, 0.0012], device='cuda:3'), out_proj_covar=tensor([1.7572e-05, 1.4472e-05, 1.2726e-05, 1.4037e-05, 2.2102e-05, 1.5737e-05, 1.5115e-05, 9.9255e-06], device='cuda:3')
2023-03-08 14:20:44,692 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 4.109e+02 6.450e+02 8.005e+02 9.321e+02 1.802e+03, threshold=1.601e+03, percent-clipped=2.0
2023-03-08 14:20:45,006 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2213.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:20:54,202 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2222.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-08 14:20:58,228 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2226.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:21:11,211 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.8584, 5.7231, 5.6471, 5.4825, 5.6801, 6.0840, 5.6598, 5.5781], device='cuda:3'), covar=tensor([0.0511, 0.0654, 0.0729, 0.0684, 0.0760, 0.0400, 0.0584, 0.1334], device='cuda:3'), in_proj_covar=tensor([0.0124, 0.0124, 0.0103, 0.0107, 0.0118, 0.0096, 0.0104, 0.0133], device='cuda:3'), out_proj_covar=tensor([1.1397e-04, 1.2616e-04, 1.0558e-04, 1.0099e-04, 1.1854e-04, 9.2849e-05, 9.8441e-05, 1.2775e-04], device='cuda:3')
2023-03-08 14:21:12,447 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5143, 4.4535, 3.9666, 4.4154, 4.6950, 4.8971, 3.9394, 3.8961], device='cuda:3'), covar=tensor([0.0751, 0.0540, 0.0476, 0.0749, 0.0271, 0.0278, 0.1019, 0.0641], device='cuda:3'), in_proj_covar=tensor([0.0027, 0.0023, 0.0026, 0.0024, 0.0026, 0.0023, 0.0027, 0.0027], device='cuda:3'), out_proj_covar=tensor([2.6691e-05, 2.1350e-05, 2.3185e-05, 2.2475e-05, 2.3818e-05, 2.0321e-05, 2.5260e-05, 2.5658e-05], device='cuda:3')
2023-03-08 14:21:24,255 INFO [train.py:898] (3/4) Epoch 1, batch 2250, loss[loss=0.4076, simple_loss=0.4377, pruned_loss=0.1887, over 18096.00 frames. ], tot_loss[loss=0.4548, simple_loss=0.4602, pruned_loss=0.2261, over 3598363.99 frames. ], batch size: 62, lr: 4.77e-02, grad_scale: 8.0
2023-03-08 14:21:42,801 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2267.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 14:21:45,783 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=2270.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 14:22:17,952 INFO [train.py:898] (3/4) Epoch 1, batch 2300, loss[loss=0.3746, simple_loss=0.4159, pruned_loss=0.1666, over 18356.00 frames. ], tot_loss[loss=0.4492, simple_loss=0.458, pruned_loss=0.2213, over 3592510.37 frames. ], batch size: 55, lr: 4.77e-02, grad_scale: 8.0
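In the train.py:898 lines, loss[...] describes the current batch while tot_loss[...] aggregates recent batches weighted by their frame counts ("over N frames"), periodically reset per the reset_interval=200 setting in the config. A guess at that bookkeeping, stated as an assumption rather than train.py's exact code:

def update_tot_loss(state, batch_loss: float, batch_frames: float):
    # state = (weighted_loss_sum, total_frames); the logged tot_loss is the
    # frame-weighted mean, and "over N frames" is total_frames
    loss_sum, frames = state
    loss_sum += batch_loss * batch_frames
    frames += batch_frames
    return (loss_sum, frames), loss_sum / frames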
], batch size: 55, lr: 4.77e-02, grad_scale: 8.0 2023-03-08 14:22:31,078 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.655e+02 6.513e+02 7.644e+02 9.367e+02 1.783e+03, threshold=1.529e+03, percent-clipped=1.0 2023-03-08 14:22:33,981 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=2315.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:23:00,814 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2340.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 14:23:10,297 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1184, 3.7690, 3.8906, 3.9601, 4.4078, 4.0875, 3.7373, 3.6644], device='cuda:3'), covar=tensor([0.0514, 0.0569, 0.0544, 0.0412, 0.0170, 0.0376, 0.0623, 0.0566], device='cuda:3'), in_proj_covar=tensor([0.0030, 0.0032, 0.0031, 0.0030, 0.0024, 0.0029, 0.0033, 0.0030], device='cuda:3'), out_proj_covar=tensor([2.5360e-05, 2.6433e-05, 2.7100e-05, 2.4813e-05, 1.8789e-05, 2.3545e-05, 2.7487e-05, 2.5954e-05], device='cuda:3') 2023-03-08 14:23:11,897 INFO [train.py:898] (3/4) Epoch 1, batch 2350, loss[loss=0.4557, simple_loss=0.4561, pruned_loss=0.2277, over 13139.00 frames. ], tot_loss[loss=0.4383, simple_loss=0.4513, pruned_loss=0.2135, over 3588128.04 frames. ], batch size: 130, lr: 4.76e-02, grad_scale: 16.0 2023-03-08 14:24:05,195 INFO [train.py:898] (3/4) Epoch 1, batch 2400, loss[loss=0.3932, simple_loss=0.4309, pruned_loss=0.1778, over 18614.00 frames. ], tot_loss[loss=0.4314, simple_loss=0.4475, pruned_loss=0.2083, over 3582641.43 frames. ], batch size: 52, lr: 4.75e-02, grad_scale: 16.0 2023-03-08 14:24:05,575 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2401.0, num_to_drop=2, layers_to_drop={1, 3} 2023-03-08 14:24:17,834 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.759e+02 6.046e+02 7.526e+02 8.987e+02 1.408e+03, threshold=1.505e+03, percent-clipped=0.0 2023-03-08 14:24:27,145 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2675, 3.8666, 3.6343, 3.7328, 3.7780, 3.9544, 3.5796, 3.5996], device='cuda:3'), covar=tensor([0.0641, 0.0258, 0.0284, 0.0397, 0.0256, 0.0214, 0.0606, 0.0418], device='cuda:3'), in_proj_covar=tensor([0.0029, 0.0023, 0.0026, 0.0024, 0.0027, 0.0023, 0.0027, 0.0028], device='cuda:3'), out_proj_covar=tensor([3.0829e-05, 2.2996e-05, 2.4440e-05, 2.3688e-05, 2.5715e-05, 2.1799e-05, 2.7153e-05, 2.8039e-05], device='cuda:3') 2023-03-08 14:24:58,101 INFO [train.py:898] (3/4) Epoch 1, batch 2450, loss[loss=0.3938, simple_loss=0.43, pruned_loss=0.1788, over 18552.00 frames. ], tot_loss[loss=0.4252, simple_loss=0.4439, pruned_loss=0.2037, over 3575676.12 frames. ], batch size: 54, lr: 4.74e-02, grad_scale: 16.0 2023-03-08 14:25:30,910 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9261, 4.2611, 5.1939, 4.6395, 4.2127, 5.2090, 4.7903, 4.5306], device='cuda:3'), covar=tensor([0.0304, 0.0675, 0.0173, 0.0601, 0.1769, 0.0282, 0.0354, 0.0422], device='cuda:3'), in_proj_covar=tensor([0.0022, 0.0025, 0.0018, 0.0023, 0.0027, 0.0018, 0.0021, 0.0021], device='cuda:3'), out_proj_covar=tensor([1.1858e-05, 1.5736e-05, 8.2471e-06, 1.3843e-05, 1.7829e-05, 9.0953e-06, 1.0634e-05, 1.1366e-05], device='cuda:3') 2023-03-08 14:25:40,658 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.05 vs. 
limit=2.0 2023-03-08 14:25:45,573 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2494.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:25:53,140 INFO [train.py:898] (3/4) Epoch 1, batch 2500, loss[loss=0.4176, simple_loss=0.4445, pruned_loss=0.1953, over 18286.00 frames. ], tot_loss[loss=0.4174, simple_loss=0.439, pruned_loss=0.1983, over 3572776.88 frames. ], batch size: 49, lr: 4.73e-02, grad_scale: 16.0 2023-03-08 14:26:05,162 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.211e+02 5.891e+02 7.187e+02 8.781e+02 1.767e+03, threshold=1.437e+03, percent-clipped=2.0 2023-03-08 14:26:05,485 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2513.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:26:15,693 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5829, 3.0813, 3.1443, 3.6236, 3.6354, 3.6652, 3.8179, 3.3409], device='cuda:3'), covar=tensor([0.0525, 0.1016, 0.3316, 0.0673, 0.0466, 0.0458, 0.0316, 0.1482], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0064, 0.0060, 0.0056, 0.0060, 0.0066, 0.0061, 0.0052], device='cuda:3'), out_proj_covar=tensor([5.2605e-05, 5.4915e-05, 6.0104e-05, 4.5590e-05, 4.8955e-05, 5.1147e-05, 4.6563e-05, 5.5889e-05], device='cuda:3') 2023-03-08 14:26:20,369 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2526.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:26:23,195 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7245, 4.3276, 4.1397, 4.3289, 4.4783, 4.6936, 4.0355, 3.8073], device='cuda:3'), covar=tensor([0.0503, 0.0445, 0.0221, 0.0280, 0.0181, 0.0168, 0.0571, 0.0410], device='cuda:3'), in_proj_covar=tensor([0.0029, 0.0023, 0.0025, 0.0023, 0.0026, 0.0023, 0.0026, 0.0028], device='cuda:3'), out_proj_covar=tensor([3.2331e-05, 2.3541e-05, 2.4456e-05, 2.3619e-05, 2.5430e-05, 2.3016e-05, 2.6633e-05, 2.9310e-05], device='cuda:3') 2023-03-08 14:26:37,718 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=2542.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:26:47,805 INFO [train.py:898] (3/4) Epoch 1, batch 2550, loss[loss=0.4278, simple_loss=0.4359, pruned_loss=0.2098, over 12453.00 frames. ], tot_loss[loss=0.4122, simple_loss=0.4359, pruned_loss=0.1946, over 3577970.83 frames. ], batch size: 129, lr: 4.72e-02, grad_scale: 16.0 2023-03-08 14:26:50,295 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3743, 5.3531, 5.3001, 4.9374, 4.8270, 4.9509, 4.8245, 4.9225], device='cuda:3'), covar=tensor([0.0573, 0.0184, 0.0187, 0.0248, 0.0376, 0.0253, 0.0365, 0.0322], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0069, 0.0057, 0.0060, 0.0070, 0.0069, 0.0072, 0.0066], device='cuda:3'), out_proj_covar=tensor([6.4529e-05, 6.8600e-05, 5.5897e-05, 6.6302e-05, 7.5565e-05, 7.2220e-05, 8.1885e-05, 6.6066e-05], device='cuda:3') 2023-03-08 14:26:58,310 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=2561.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:27:12,159 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=2574.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:27:42,177 INFO [train.py:898] (3/4) Epoch 1, batch 2600, loss[loss=0.4117, simple_loss=0.4496, pruned_loss=0.1869, over 18567.00 frames. ], tot_loss[loss=0.4033, simple_loss=0.4293, pruned_loss=0.1889, over 3579658.25 frames. 
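The optim.py:369 lines decode cleanly from their own numbers: the five quartile values are the min/25%/median/75%/max of recently observed gradient norms, the threshold is always Clipping_scale times the median (2.0 * 7.187e+02 = 1.437e+03 in the line above), and percent-clipped is the share of recent batches whose norm exceeded the threshold (it is 0.0 exactly in the lines whose printed max sits below the threshold). A minimal sketch of that bookkeeping; the window length and reporting cadence are assumptions, since the log does not reveal them:

    import torch

    class MedianGradClipper:
        # Hypothetical sketch: clip gradients to scale * median of recent
        # grad norms. The quartile/threshold arithmetic matches the log;
        # the window size below is an assumption, not this run's value.
        def __init__(self, scale: float = 2.0, window: int = 128):
            self.scale = scale
            self.window = window
            self.norms = []        # recent total gradient norms
            self.num_seen = 0
            self.num_clipped = 0

        def clip_(self, parameters) -> float:
            params = [p for p in parameters if p.grad is not None]
            norm = torch.norm(torch.stack([p.grad.norm() for p in params])).item()
            self.norms = (self.norms + [norm])[-self.window:]
            threshold = self.scale * float(torch.tensor(self.norms).median())
            self.num_seen += 1
            if norm > threshold:
                self.num_clipped += 1
                for p in params:
                    p.grad.mul_(threshold / norm)  # rescale onto the threshold
            return threshold

        def report(self) -> str:
            q = torch.quantile(torch.tensor(self.norms),
                               torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
            pct = 100.0 * self.num_clipped / max(self.num_seen, 1)
            return ("grad-norm quartiles %s, threshold=%.3e, percent-clipped=%.1f"
                    % (q.tolist(), self.scale * float(q[2]), pct))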
], batch size: 54, lr: 4.71e-02, grad_scale: 16.0 2023-03-08 14:27:55,522 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.922e+02 6.237e+02 7.424e+02 9.705e+02 2.435e+03, threshold=1.485e+03, percent-clipped=1.0 2023-03-08 14:28:06,351 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.88 vs. limit=2.0 2023-03-08 14:28:36,989 INFO [train.py:898] (3/4) Epoch 1, batch 2650, loss[loss=0.3423, simple_loss=0.3895, pruned_loss=0.1475, over 18259.00 frames. ], tot_loss[loss=0.3991, simple_loss=0.427, pruned_loss=0.1857, over 3561997.13 frames. ], batch size: 47, lr: 4.70e-02, grad_scale: 16.0 2023-03-08 14:29:15,537 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2685.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 14:29:27,253 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2696.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 14:29:32,058 INFO [train.py:898] (3/4) Epoch 1, batch 2700, loss[loss=0.435, simple_loss=0.4407, pruned_loss=0.2147, over 18448.00 frames. ], tot_loss[loss=0.3935, simple_loss=0.4233, pruned_loss=0.182, over 3567575.54 frames. ], batch size: 43, lr: 4.69e-02, grad_scale: 16.0 2023-03-08 14:29:45,134 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.900e+02 5.817e+02 6.597e+02 8.414e+02 1.857e+03, threshold=1.319e+03, percent-clipped=1.0 2023-03-08 14:30:19,349 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6507, 4.5822, 4.2427, 4.4703, 4.7686, 4.9267, 4.2934, 4.0826], device='cuda:3'), covar=tensor([0.0713, 0.0392, 0.0227, 0.0279, 0.0142, 0.0162, 0.0561, 0.0377], device='cuda:3'), in_proj_covar=tensor([0.0033, 0.0023, 0.0026, 0.0023, 0.0026, 0.0024, 0.0027, 0.0028], device='cuda:3'), out_proj_covar=tensor([4.0291e-05, 2.6531e-05, 2.7430e-05, 2.6733e-05, 2.7035e-05, 2.5348e-05, 3.1855e-05, 3.3025e-05], device='cuda:3') 2023-03-08 14:30:22,530 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2746.0, num_to_drop=2, layers_to_drop={1, 2} 2023-03-08 14:30:27,316 INFO [train.py:898] (3/4) Epoch 1, batch 2750, loss[loss=0.3331, simple_loss=0.3689, pruned_loss=0.1487, over 18446.00 frames. ], tot_loss[loss=0.3899, simple_loss=0.4213, pruned_loss=0.1794, over 3564048.66 frames. ], batch size: 43, lr: 4.68e-02, grad_scale: 16.0 2023-03-08 14:31:08,969 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0 2023-03-08 14:31:22,077 INFO [train.py:898] (3/4) Epoch 1, batch 2800, loss[loss=0.4084, simple_loss=0.4382, pruned_loss=0.1893, over 18253.00 frames. ], tot_loss[loss=0.3908, simple_loss=0.4224, pruned_loss=0.1797, over 3569397.59 frames. 
], batch size: 60, lr: 4.67e-02, grad_scale: 16.0 2023-03-08 14:31:23,330 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.7052, 5.4584, 5.4045, 5.4807, 5.5483, 5.9748, 5.3919, 5.3995], device='cuda:3'), covar=tensor([0.0450, 0.0625, 0.0564, 0.0610, 0.0726, 0.0497, 0.0583, 0.1226], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0140, 0.0112, 0.0113, 0.0137, 0.0126, 0.0111, 0.0146], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0002], device='cuda:3') 2023-03-08 14:31:35,363 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.828e+02 5.568e+02 7.020e+02 9.459e+02 2.422e+03, threshold=1.404e+03, percent-clipped=9.0 2023-03-08 14:31:40,062 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0952, 5.0007, 4.7116, 5.1136, 5.0134, 4.6787, 4.9015, 4.4371], device='cuda:3'), covar=tensor([0.0302, 0.0260, 0.0956, 0.0455, 0.0227, 0.0389, 0.0384, 0.0410], device='cuda:3'), in_proj_covar=tensor([0.0095, 0.0104, 0.0122, 0.0101, 0.0097, 0.0120, 0.0118, 0.0121], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-08 14:31:47,513 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2824.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:32:09,955 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.19 vs. limit=2.0 2023-03-08 14:32:16,542 INFO [train.py:898] (3/4) Epoch 1, batch 2850, loss[loss=0.374, simple_loss=0.4133, pruned_loss=0.1673, over 18305.00 frames. ], tot_loss[loss=0.3874, simple_loss=0.4204, pruned_loss=0.1772, over 3574314.22 frames. ], batch size: 54, lr: 4.66e-02, grad_scale: 16.0 2023-03-08 14:32:44,578 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7046, 3.2107, 3.1175, 3.6595, 2.9740, 3.1933, 3.8197, 3.5705], device='cuda:3'), covar=tensor([0.0297, 0.0644, 0.3499, 0.0299, 0.1182, 0.0495, 0.0169, 0.1318], device='cuda:3'), in_proj_covar=tensor([0.0068, 0.0065, 0.0066, 0.0054, 0.0066, 0.0070, 0.0062, 0.0051], device='cuda:3'), out_proj_covar=tensor([5.5339e-05, 5.6651e-05, 6.8163e-05, 4.7438e-05, 5.8341e-05, 5.7614e-05, 4.9795e-05, 5.5091e-05], device='cuda:3') 2023-03-08 14:32:53,622 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2885.0, num_to_drop=2, layers_to_drop={2, 3} 2023-03-08 14:33:10,288 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0403, 5.1265, 3.8583, 5.1137, 5.1736, 4.5947, 4.7402, 4.2385], device='cuda:3'), covar=tensor([0.0057, 0.0113, 0.0835, 0.0042, 0.0047, 0.0258, 0.0144, 0.0302], device='cuda:3'), in_proj_covar=tensor([0.0024, 0.0021, 0.0040, 0.0020, 0.0019, 0.0032, 0.0030, 0.0027], device='cuda:3'), out_proj_covar=tensor([1.7433e-05, 1.6443e-05, 3.5770e-05, 1.4487e-05, 1.4556e-05, 2.8467e-05, 2.6184e-05, 2.3287e-05], device='cuda:3') 2023-03-08 14:33:10,888 INFO [train.py:898] (3/4) Epoch 1, batch 2900, loss[loss=0.2961, simple_loss=0.3459, pruned_loss=0.1231, over 18270.00 frames. ], tot_loss[loss=0.383, simple_loss=0.4173, pruned_loss=0.1744, over 3572972.17 frames. 
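The scaling.py:679 lines track how far each group of activation channels is from having a white (identity-proportional) covariance: a whitening metric of this kind equals 1.0 for perfectly white features and grows with covariance anisotropy, and a penalty presumably activates when the metric exceeds the printed limit, which would explain why the logged values hover around it. The exact formula used by this codebase is an assumption on my part; a self-contained metric with the stated properties looks like this:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (num_frames, num_channels). Channels are split into num_groups
        # groups; for each group form the (uncentered) channel covariance C
        # and measure sum(C**2) / (d * mean(diag(C))**2), which is 1.0 iff
        # C is a multiple of the identity and > 1.0 otherwise
        # (Cauchy-Schwarz on the eigenvalues).
        num_frames, num_channels = x.shape
        cpg = num_channels // num_groups  # channels per group
        xg = x.reshape(num_frames, num_groups, cpg).permute(1, 0, 2)
        covar = torch.matmul(xg.transpose(1, 2), xg) / num_frames  # (G, cpg, cpg)
        mean_diag = covar.diagonal(dim1=1, dim2=2).mean()
        return (covar ** 2).sum() / (num_groups * cpg * mean_diag ** 2)

    if __name__ == "__main__":
        white = torch.randn(10000, 96)
        print(whitening_metric(white, num_groups=8))   # close to 1.0
        skewed = white * torch.linspace(0.1, 3.0, 96)  # anisotropic channels
        print(whitening_metric(skewed, num_groups=8))  # well above 1.0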
], batch size: 47, lr: 4.65e-02, grad_scale: 16.0 2023-03-08 14:33:23,642 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.949e+02 5.658e+02 6.946e+02 8.980e+02 2.248e+03, threshold=1.389e+03, percent-clipped=2.0 2023-03-08 14:33:29,875 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3167, 3.4984, 2.9828, 3.6630, 3.5678, 3.7840, 3.1481, 3.3766], device='cuda:3'), covar=tensor([0.0140, 0.0475, 0.0709, 0.0386, 0.0298, 0.0306, 0.0551, 0.0275], device='cuda:3'), in_proj_covar=tensor([0.0019, 0.0026, 0.0025, 0.0020, 0.0019, 0.0020, 0.0024, 0.0020], device='cuda:3'), out_proj_covar=tensor([1.5577e-05, 2.3004e-05, 2.2943e-05, 1.7594e-05, 1.4770e-05, 1.5981e-05, 2.1392e-05, 1.7238e-05], device='cuda:3') 2023-03-08 14:33:50,337 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2937.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:34:02,012 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.93 vs. limit=2.0 2023-03-08 14:34:05,708 INFO [train.py:898] (3/4) Epoch 1, batch 2950, loss[loss=0.3646, simple_loss=0.4043, pruned_loss=0.1624, over 18557.00 frames. ], tot_loss[loss=0.3762, simple_loss=0.4126, pruned_loss=0.17, over 3581065.73 frames. ], batch size: 49, lr: 4.64e-02, grad_scale: 16.0 2023-03-08 14:34:55,006 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2996.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:34:57,607 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2998.0, num_to_drop=2, layers_to_drop={0, 2} 2023-03-08 14:35:00,658 INFO [train.py:898] (3/4) Epoch 1, batch 3000, loss[loss=0.443, simple_loss=0.4498, pruned_loss=0.2181, over 12831.00 frames. ], tot_loss[loss=0.3746, simple_loss=0.4116, pruned_loss=0.1689, over 3568410.77 frames. ], batch size: 129, lr: 4.63e-02, grad_scale: 8.0 2023-03-08 14:35:00,658 INFO [train.py:923] (3/4) Computing validation loss 2023-03-08 14:35:12,697 INFO [train.py:932] (3/4) Epoch 1, validation: loss=0.2954, simple_loss=0.387, pruned_loss=0.102, over 944034.00 frames. 2023-03-08 14:35:12,698 INFO [train.py:933] (3/4) Maximum memory allocated so far is 17654MB 2023-03-08 14:35:26,916 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.611e+02 5.968e+02 7.272e+02 9.035e+02 2.166e+03, threshold=1.454e+03, percent-clipped=4.0 2023-03-08 14:35:32,487 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3019.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:35:57,694 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3041.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:36:00,949 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=3044.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:36:01,162 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5533, 3.4596, 3.4155, 3.6694, 3.4035, 3.2851, 3.5671, 3.8953], device='cuda:3'), covar=tensor([0.0767, 0.0955, 0.0309, 0.0198, 0.0917, 0.1258, 0.0328, 0.0158], device='cuda:3'), in_proj_covar=tensor([0.0045, 0.0048, 0.0054, 0.0049, 0.0049, 0.0046, 0.0041, 0.0047], device='cuda:3'), out_proj_covar=tensor([5.0401e-05, 5.4863e-05, 4.8723e-05, 4.1693e-05, 4.7383e-05, 4.6895e-05, 3.6903e-05, 3.8117e-05], device='cuda:3') 2023-03-08 14:36:08,340 INFO [train.py:898] (3/4) Epoch 1, batch 3050, loss[loss=0.3506, simple_loss=0.3908, pruned_loss=0.1552, over 18293.00 frames. 
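The attn_weights_entropy tensors are a per-head diagnostic: each value is plausibly the entropy, in nats, of one attention head's distribution averaged over query positions, so 0 would mean a head locks onto a single frame and log(num_frames) means uniform attention. The spread inside a single row above (2.98 next to 4.32) then reflects heads of quite different sharpness. A minimal version of such a statistic, with the exact averaging treated as an assumption:

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        # attn: (num_heads, tgt_len, src_len); rows are softmax outputs.
        # Returns one value per head: the mean over queries of
        # H(p) = -sum_j p_j * log(p_j).
        eps = 1e-20  # guard against log(0)
        return -(attn * (attn + eps).log()).sum(dim=-1).mean(dim=-1)

    if __name__ == "__main__":
        logits = torch.randn(8, 50, 200)
        attn = logits.softmax(dim=-1)
        # A bit below log(200) ~ 5.3 for random logits; 0.0 for one-hot rows.
        print(attn_weights_entropy(attn))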
], tot_loss[loss=0.3717, simple_loss=0.4094, pruned_loss=0.167, over 3569547.67 frames. ], batch size: 49, lr: 4.62e-02, grad_scale: 8.0 2023-03-08 14:36:41,139 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3080.0, num_to_drop=2, layers_to_drop={1, 2} 2023-03-08 14:37:04,042 INFO [train.py:898] (3/4) Epoch 1, batch 3100, loss[loss=0.4102, simple_loss=0.4332, pruned_loss=0.1936, over 18481.00 frames. ], tot_loss[loss=0.367, simple_loss=0.4064, pruned_loss=0.1638, over 3586453.02 frames. ], batch size: 51, lr: 4.61e-02, grad_scale: 8.0 2023-03-08 14:37:18,600 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.360e+02 5.789e+02 7.194e+02 8.721e+02 2.161e+03, threshold=1.439e+03, percent-clipped=3.0 2023-03-08 14:37:59,561 INFO [train.py:898] (3/4) Epoch 1, batch 3150, loss[loss=0.3863, simple_loss=0.4191, pruned_loss=0.1768, over 18466.00 frames. ], tot_loss[loss=0.3647, simple_loss=0.405, pruned_loss=0.1622, over 3595845.82 frames. ], batch size: 59, lr: 4.60e-02, grad_scale: 8.0 2023-03-08 14:38:05,542 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.04 vs. limit=2.0 2023-03-08 14:38:31,221 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3180.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:38:54,772 INFO [train.py:898] (3/4) Epoch 1, batch 3200, loss[loss=0.3904, simple_loss=0.4244, pruned_loss=0.1782, over 18346.00 frames. ], tot_loss[loss=0.3634, simple_loss=0.4039, pruned_loss=0.1615, over 3595972.14 frames. ], batch size: 56, lr: 4.59e-02, grad_scale: 8.0 2023-03-08 14:39:08,063 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.309e+02 6.227e+02 7.762e+02 9.563e+02 2.131e+03, threshold=1.552e+03, percent-clipped=3.0 2023-03-08 14:39:49,491 INFO [train.py:898] (3/4) Epoch 1, batch 3250, loss[loss=0.3882, simple_loss=0.4312, pruned_loss=0.1726, over 18487.00 frames. ], tot_loss[loss=0.3631, simple_loss=0.4036, pruned_loss=0.1613, over 3590292.12 frames. ], batch size: 53, lr: 4.58e-02, grad_scale: 8.0 2023-03-08 14:39:58,324 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6782, 3.0364, 3.0473, 3.3556, 2.8552, 3.4127, 3.4768, 3.1454], device='cuda:3'), covar=tensor([0.0268, 0.1042, 0.1326, 0.0286, 0.1187, 0.0611, 0.0238, 0.1539], device='cuda:3'), in_proj_covar=tensor([0.0058, 0.0053, 0.0047, 0.0048, 0.0053, 0.0060, 0.0054, 0.0040], device='cuda:3'), out_proj_covar=tensor([4.8638e-05, 4.7158e-05, 5.1681e-05, 4.0382e-05, 4.8701e-05, 5.1423e-05, 4.3492e-05, 4.5872e-05], device='cuda:3') 2023-03-08 14:40:30,601 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3288.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:40:35,763 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3293.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:40:45,134 INFO [train.py:898] (3/4) Epoch 1, batch 3300, loss[loss=0.3363, simple_loss=0.3947, pruned_loss=0.1389, over 18348.00 frames. ], tot_loss[loss=0.3591, simple_loss=0.4003, pruned_loss=0.159, over 3591629.79 frames. 
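grad_scale in the loss records is dynamic loss scaling for fp16 training: it doubles while gradients stay finite (4.0 -> 8.0 -> 16.0 over the batches up to 2350 above) and is halved when an overflow is detected, as in the drop back from 16.0 to 8.0 at batch 3000. The standard torch.cuda.amp loop has exactly this behaviour; the constants below are loose guesses from the spacing of the doublings, not this run's settings:

    import torch

    # Generic dynamic-loss-scaling sketch; icefall wraps this differently,
    # so treat every constant here as an assumption.
    scaler = torch.cuda.amp.GradScaler(
        init_scale=4.0,       # earliest grad_scale visible in this stretch
        growth_factor=2.0,    # doubling, as in 4.0 -> 8.0 -> 16.0
        backoff_factor=0.5,   # halving on overflow, as at batch 3000
        growth_interval=400,  # rough guess from the doubling spacing
    )

    def amp_step(loss, optimizer):
        optimizer.zero_grad()
        scaler.scale(loss).backward()  # backward on grad_scale * loss
        scaler.step(optimizer)         # unscale; skip the step on inf/nan
        scaler.update()                # grow or back off grad_scale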
], batch size: 55, lr: 4.57e-02, grad_scale: 8.0 2023-03-08 14:40:49,448 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3688, 5.6713, 5.6308, 5.3732, 5.1473, 5.6381, 5.7548, 5.5288], device='cuda:3'), covar=tensor([0.0597, 0.0576, 0.0254, 0.0379, 0.0694, 0.0303, 0.0367, 0.0499], device='cuda:3'), in_proj_covar=tensor([0.0168, 0.0158, 0.0120, 0.0142, 0.0162, 0.0125, 0.0136, 0.0135], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 14:40:50,592 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4983, 4.3077, 4.2731, 3.4625, 3.6432, 3.2352, 3.9308, 3.5677], device='cuda:3'), covar=tensor([0.0392, 0.0376, 0.0239, 0.0341, 0.0161, 0.0569, 0.0175, 0.0279], device='cuda:3'), in_proj_covar=tensor([0.0030, 0.0032, 0.0028, 0.0027, 0.0032, 0.0030, 0.0030, 0.0033], device='cuda:3'), out_proj_covar=tensor([4.4771e-05, 5.6304e-05, 4.0987e-05, 3.9524e-05, 4.2296e-05, 4.3647e-05, 4.0021e-05, 4.8368e-05], device='cuda:3') 2023-03-08 14:40:58,192 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7648, 2.6512, 1.2471, 3.0794, 2.9548, 3.7814, 1.6928, 2.8861], device='cuda:3'), covar=tensor([0.0159, 0.1061, 0.1929, 0.0508, 0.0519, 0.0198, 0.1243, 0.0404], device='cuda:3'), in_proj_covar=tensor([0.0023, 0.0037, 0.0037, 0.0023, 0.0025, 0.0024, 0.0033, 0.0027], device='cuda:3'), out_proj_covar=tensor([1.9208e-05, 3.6267e-05, 3.6778e-05, 2.2283e-05, 2.0838e-05, 1.9893e-05, 3.0251e-05, 2.3781e-05], device='cuda:3') 2023-03-08 14:40:58,791 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 4.218e+02 5.885e+02 6.635e+02 8.607e+02 2.408e+03, threshold=1.327e+03, percent-clipped=3.0 2023-03-08 14:41:01,462 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0186, 4.6698, 4.6472, 4.0451, 4.6278, 3.8643, 4.2310, 4.1248], device='cuda:3'), covar=tensor([0.0537, 0.0219, 0.0186, 0.0255, 0.0111, 0.0540, 0.0380, 0.0423], device='cuda:3'), in_proj_covar=tensor([0.0025, 0.0022, 0.0019, 0.0021, 0.0014, 0.0022, 0.0017, 0.0021], device='cuda:3'), out_proj_covar=tensor([1.5579e-05, 1.3485e-05, 1.0271e-05, 1.1888e-05, 7.5864e-06, 1.3694e-05, 9.6859e-06, 1.2943e-05], device='cuda:3') 2023-03-08 14:41:29,217 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3341.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:41:38,142 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3349.0, num_to_drop=2, layers_to_drop={1, 2} 2023-03-08 14:41:40,512 INFO [train.py:898] (3/4) Epoch 1, batch 3350, loss[loss=0.3092, simple_loss=0.3583, pruned_loss=0.13, over 17581.00 frames. ], tot_loss[loss=0.3561, simple_loss=0.3987, pruned_loss=0.1567, over 3602765.51 frames. ], batch size: 39, lr: 4.56e-02, grad_scale: 8.0 2023-03-08 14:42:06,614 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3375.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:42:22,286 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=3389.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:42:36,012 INFO [train.py:898] (3/4) Epoch 1, batch 3400, loss[loss=0.3632, simple_loss=0.4092, pruned_loss=0.1586, over 16287.00 frames. ], tot_loss[loss=0.3547, simple_loss=0.3982, pruned_loss=0.1556, over 3603660.48 frames. 
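For scale, the frame counts convert to audio time once two assumptions are granted (a 10 ms feature frame shift and a 4x subsampled encoder, neither printed in this stretch of the log): the loss[... over 16287.00 frames ...] batch just above then carries about 651 s of audio across its 94 cuts, roughly 7 s per utterance, and the ~3.6M-frame windows behind tot_loss span about 40 hours:

    # Back-of-the-envelope conversion of logged frame counts to audio time.
    # Both constants are assumptions about this setup, not logged values.
    FRAME_SHIFT_S = 0.01   # 10 ms fbank frame shift
    SUBSAMPLING = 4        # encoder frame-rate reduction

    def frames_to_seconds(frames: float) -> float:
        return frames * SUBSAMPLING * FRAME_SHIFT_S

    print(frames_to_seconds(16287.00) / 94)        # ~6.9 s per utterance
    print(frames_to_seconds(3603660.48) / 3600.0)  # ~40 h behind tot_loss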
], batch size: 94, lr: 4.55e-02, grad_scale: 8.0 2023-03-08 14:42:50,975 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.931e+02 5.703e+02 6.610e+02 8.336e+02 1.336e+03, threshold=1.322e+03, percent-clipped=1.0 2023-03-08 14:43:27,894 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.91 vs. limit=2.0 2023-03-08 14:43:31,443 INFO [train.py:898] (3/4) Epoch 1, batch 3450, loss[loss=0.2959, simple_loss=0.3563, pruned_loss=0.1178, over 18271.00 frames. ], tot_loss[loss=0.3547, simple_loss=0.3986, pruned_loss=0.1555, over 3588738.41 frames. ], batch size: 47, lr: 4.54e-02, grad_scale: 8.0 2023-03-08 14:43:48,833 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4540, 4.6603, 4.2618, 4.3086, 4.2935, 4.6240, 4.3858, 4.2917], device='cuda:3'), covar=tensor([0.0488, 0.0660, 0.0633, 0.0547, 0.0932, 0.0812, 0.0576, 0.1265], device='cuda:3'), in_proj_covar=tensor([0.0131, 0.0130, 0.0113, 0.0107, 0.0150, 0.0144, 0.0109, 0.0157], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0001, 0.0001, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002], device='cuda:3') 2023-03-08 14:43:51,288 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3095, 3.0117, 2.6175, 2.9675, 3.0869, 3.0704, 2.7851, 3.1504], device='cuda:3'), covar=tensor([0.0703, 0.0910, 0.0557, 0.0407, 0.1062, 0.0886, 0.0469, 0.0201], device='cuda:3'), in_proj_covar=tensor([0.0039, 0.0039, 0.0058, 0.0049, 0.0049, 0.0041, 0.0040, 0.0046], device='cuda:3'), out_proj_covar=tensor([4.5693e-05, 5.0036e-05, 5.6614e-05, 4.5330e-05, 5.4328e-05, 4.5622e-05, 4.0280e-05, 4.0623e-05], device='cuda:3') 2023-03-08 14:44:04,112 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3480.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:44:27,331 INFO [train.py:898] (3/4) Epoch 1, batch 3500, loss[loss=0.4066, simple_loss=0.442, pruned_loss=0.1856, over 16043.00 frames. ], tot_loss[loss=0.3529, simple_loss=0.3972, pruned_loss=0.1543, over 3574988.00 frames. 
], batch size: 94, lr: 4.53e-02, grad_scale: 8.0 2023-03-08 14:44:32,573 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6973, 5.2238, 5.3814, 5.3887, 5.5130, 6.0978, 5.6582, 5.4571], device='cuda:3'), covar=tensor([0.0462, 0.0586, 0.0618, 0.0612, 0.1025, 0.0480, 0.0566, 0.1493], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0142, 0.0122, 0.0114, 0.0160, 0.0153, 0.0116, 0.0168], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002], device='cuda:3') 2023-03-08 14:44:42,686 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 4.394e+02 6.336e+02 7.785e+02 9.331e+02 2.014e+03, threshold=1.557e+03, percent-clipped=6.0 2023-03-08 14:44:56,050 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4509, 5.3120, 5.2372, 5.0570, 5.4581, 5.8601, 5.3515, 5.3255], device='cuda:3'), covar=tensor([0.0601, 0.0779, 0.0783, 0.0750, 0.1150, 0.0745, 0.0667, 0.1873], device='cuda:3'), in_proj_covar=tensor([0.0142, 0.0142, 0.0123, 0.0115, 0.0161, 0.0152, 0.0117, 0.0170], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002], device='cuda:3') 2023-03-08 14:44:58,031 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=3528.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:45:16,077 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3546.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 14:45:20,678 INFO [train.py:898] (3/4) Epoch 1, batch 3550, loss[loss=0.2988, simple_loss=0.3492, pruned_loss=0.1242, over 18267.00 frames. ], tot_loss[loss=0.3537, simple_loss=0.3975, pruned_loss=0.155, over 3578212.65 frames. ], batch size: 45, lr: 4.51e-02, grad_scale: 8.0 2023-03-08 14:45:30,495 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9755, 4.3119, 4.0048, 4.0708, 3.3204, 4.2946, 4.5584, 3.6052], device='cuda:3'), covar=tensor([0.0101, 0.0079, 0.0088, 0.0092, 0.0374, 0.0071, 0.0042, 0.0252], device='cuda:3'), in_proj_covar=tensor([0.0025, 0.0022, 0.0024, 0.0024, 0.0037, 0.0022, 0.0021, 0.0033], device='cuda:3'), out_proj_covar=tensor([2.1882e-05, 1.7301e-05, 1.9502e-05, 2.0960e-05, 3.0051e-05, 1.6633e-05, 1.6394e-05, 2.7316e-05], device='cuda:3') 2023-03-08 14:45:49,966 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5829, 2.4693, 1.3284, 3.2018, 2.5986, 3.4919, 1.7213, 3.0896], device='cuda:3'), covar=tensor([0.0177, 0.1168, 0.2174, 0.0373, 0.0648, 0.0301, 0.1325, 0.0369], device='cuda:3'), in_proj_covar=tensor([0.0029, 0.0047, 0.0048, 0.0027, 0.0034, 0.0030, 0.0043, 0.0034], device='cuda:3'), out_proj_covar=tensor([2.3032e-05, 4.7956e-05, 5.0170e-05, 2.6678e-05, 2.8198e-05, 2.4841e-05, 4.1188e-05, 2.9302e-05], device='cuda:3') 2023-03-08 14:46:04,604 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3593.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:46:12,800 INFO [train.py:898] (3/4) Epoch 1, batch 3600, loss[loss=0.3086, simple_loss=0.3639, pruned_loss=0.1266, over 18501.00 frames. ], tot_loss[loss=0.3518, simple_loss=0.3964, pruned_loss=0.1536, over 3569335.49 frames. 
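The zipformer.py:625 records show stochastic depth with a per-stack schedule: each encoder stack owns its own staggered warm-up window (begin/end pairs of 666.7/1333.3, 1333.3/2000.0 and so on, in batch counts), and on any batch it may skip up to two randomly chosen layers. Drops get rarer as batch_count advances, yet the occasional num_to_drop=1 well past warmup_end (batch_count=3546.0 against warmup_end=3333.3 above) suggests a small residual skip rate. The true schedule is not recoverable from the log; one plausible sketch, with all rates being assumptions:

    import random

    def pick_layers_to_drop(batch_count: float,
                            warmup_begin: float,
                            warmup_end: float,
                            num_layers: int) -> set:
        # Hypothetical: the skip probability ramps down across this stack's
        # warm-up window, then stays at a small floor, which would explain
        # occasional drops even after warmup_end. All rates are assumptions.
        if batch_count < warmup_begin:
            rate = 0.5
        elif batch_count < warmup_end:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            rate = 0.5 * (1.0 - frac) + 0.05 * frac  # linear ramp down
        else:
            rate = 0.05  # small residual rate after warm-up
        num_to_drop = sum(random.random() < rate for _ in range(2))  # 0..2
        return set(random.sample(range(num_layers), num_to_drop))

    # e.g. pick_layers_to_drop(2090.0, 2000.0, 2666.7, num_layers=4)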
], batch size: 47, lr: 4.50e-02, grad_scale: 8.0 2023-03-08 14:46:19,777 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3607.0, num_to_drop=2, layers_to_drop={1, 2} 2023-03-08 14:46:26,691 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.777e+02 7.280e+02 9.972e+02 1.228e+03 2.916e+03, threshold=1.994e+03, percent-clipped=11.0 2023-03-08 14:47:17,205 INFO [train.py:898] (3/4) Epoch 2, batch 0, loss[loss=0.3122, simple_loss=0.3608, pruned_loss=0.1317, over 18554.00 frames. ], tot_loss[loss=0.3122, simple_loss=0.3608, pruned_loss=0.1317, over 18554.00 frames. ], batch size: 45, lr: 4.41e-02, grad_scale: 8.0 2023-03-08 14:47:17,205 INFO [train.py:923] (3/4) Computing validation loss 2023-03-08 14:47:29,036 INFO [train.py:932] (3/4) Epoch 2, validation: loss=0.2643, simple_loss=0.3556, pruned_loss=0.08646, over 944034.00 frames. 2023-03-08 14:47:29,037 INFO [train.py:933] (3/4) Maximum memory allocated so far is 18343MB 2023-03-08 14:47:35,866 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=3641.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:47:39,162 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3644.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:48:16,085 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3675.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:48:17,235 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3676.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:48:26,900 INFO [train.py:898] (3/4) Epoch 2, batch 50, loss[loss=0.3492, simple_loss=0.3964, pruned_loss=0.1509, over 18411.00 frames. ], tot_loss[loss=0.3366, simple_loss=0.3855, pruned_loss=0.1438, over 812338.37 frames. ], batch size: 48, lr: 4.40e-02, grad_scale: 8.0 2023-03-08 14:49:00,565 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 4.344e+02 6.253e+02 7.943e+02 9.806e+02 1.695e+03, threshold=1.589e+03, percent-clipped=0.0 2023-03-08 14:49:11,434 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=3723.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:49:25,277 INFO [train.py:898] (3/4) Epoch 2, batch 100, loss[loss=0.3433, simple_loss=0.3933, pruned_loss=0.1466, over 17928.00 frames. ], tot_loss[loss=0.3352, simple_loss=0.3848, pruned_loss=0.1428, over 1430404.10 frames. ], batch size: 65, lr: 4.39e-02, grad_scale: 8.0 2023-03-08 14:49:27,906 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3737.0, num_to_drop=2, layers_to_drop={2, 3} 2023-03-08 14:49:37,719 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6112, 6.0354, 5.8787, 5.7521, 5.5015, 6.0250, 6.0619, 5.9447], device='cuda:3'), covar=tensor([0.0538, 0.0520, 0.0208, 0.0419, 0.0832, 0.0255, 0.0374, 0.0438], device='cuda:3'), in_proj_covar=tensor([0.0189, 0.0182, 0.0135, 0.0167, 0.0200, 0.0148, 0.0161, 0.0152], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 14:49:39,207 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.81 vs. 
limit=2.0 2023-03-08 14:50:05,045 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6118, 4.6678, 3.8918, 4.2071, 4.4549, 4.7432, 4.2306, 3.6197], device='cuda:3'), covar=tensor([0.0332, 0.0108, 0.0146, 0.0111, 0.0092, 0.0094, 0.0232, 0.0273], device='cuda:3'), in_proj_covar=tensor([0.0032, 0.0021, 0.0025, 0.0021, 0.0024, 0.0022, 0.0026, 0.0027], device='cuda:3'), out_proj_covar=tensor([5.5371e-05, 3.4791e-05, 4.1778e-05, 3.6985e-05, 3.5767e-05, 3.3825e-05, 4.1434e-05, 4.5002e-05], device='cuda:3') 2023-03-08 14:50:22,839 INFO [train.py:898] (3/4) Epoch 2, batch 150, loss[loss=0.2779, simple_loss=0.3285, pruned_loss=0.1136, over 17682.00 frames. ], tot_loss[loss=0.3367, simple_loss=0.386, pruned_loss=0.1437, over 1911881.68 frames. ], batch size: 39, lr: 4.38e-02, grad_scale: 8.0 2023-03-08 14:50:27,036 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.79 vs. limit=2.0 2023-03-08 14:50:55,227 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.556e+02 6.308e+02 7.748e+02 1.009e+03 1.866e+03, threshold=1.550e+03, percent-clipped=3.0 2023-03-08 14:51:21,236 INFO [train.py:898] (3/4) Epoch 2, batch 200, loss[loss=0.3622, simple_loss=0.4194, pruned_loss=0.1525, over 17963.00 frames. ], tot_loss[loss=0.3365, simple_loss=0.3856, pruned_loss=0.1437, over 2295074.93 frames. ], batch size: 65, lr: 4.37e-02, grad_scale: 8.0 2023-03-08 14:52:07,114 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9198, 4.7023, 4.8887, 4.7972, 4.5918, 4.4973, 5.1778, 5.0057], device='cuda:3'), covar=tensor([0.0114, 0.0221, 0.0218, 0.0148, 0.0189, 0.0202, 0.0161, 0.0165], device='cuda:3'), in_proj_covar=tensor([0.0058, 0.0053, 0.0050, 0.0054, 0.0055, 0.0062, 0.0052, 0.0050], device='cuda:3'), out_proj_covar=tensor([9.1311e-05, 7.2938e-05, 6.9843e-05, 6.9887e-05, 8.5857e-05, 9.8954e-05, 7.3370e-05, 6.5231e-05], device='cuda:3') 2023-03-08 14:52:20,104 INFO [train.py:898] (3/4) Epoch 2, batch 250, loss[loss=0.421, simple_loss=0.4351, pruned_loss=0.2034, over 12557.00 frames. ], tot_loss[loss=0.3353, simple_loss=0.384, pruned_loss=0.1433, over 2580254.10 frames. ], batch size: 130, lr: 4.36e-02, grad_scale: 8.0 2023-03-08 14:52:25,467 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8311, 4.6879, 4.0134, 4.8243, 4.6695, 4.2543, 4.6642, 4.0743], device='cuda:3'), covar=tensor([0.0261, 0.0337, 0.1535, 0.0383, 0.0236, 0.0451, 0.0268, 0.0555], device='cuda:3'), in_proj_covar=tensor([0.0134, 0.0138, 0.0201, 0.0128, 0.0124, 0.0155, 0.0149, 0.0162], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 14:52:39,891 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3902.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 14:52:53,094 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.269e+02 6.436e+02 8.265e+02 1.082e+03 2.310e+03, threshold=1.653e+03, percent-clipped=6.0 2023-03-08 14:53:08,291 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3926.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:53:18,696 INFO [train.py:898] (3/4) Epoch 2, batch 300, loss[loss=0.3538, simple_loss=0.4059, pruned_loss=0.1509, over 18281.00 frames. ], tot_loss[loss=0.3328, simple_loss=0.3825, pruned_loss=0.1415, over 2801154.48 frames. 
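The "Maximum memory allocated so far is ...MB" lines printed after each validation pass report CUDA's allocator high-water mark, which only grows within a process; hence the climb from 17654MB to 18343MB between the two validation passes above as larger batches are encountered. Reproducing the line is a one-liner (a sketch; whether the script uses exactly this call is an assumption):

    import torch

    def log_peak_memory(device: torch.device) -> None:
        # High-water mark of memory actually allocated by tensors on this
        # device since process start (or the last reset_peak_memory_stats).
        mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        print(f"Maximum memory allocated so far is {mb}MB")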
], batch size: 57, lr: 4.35e-02, grad_scale: 8.0 2023-03-08 14:53:29,555 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3944.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 14:53:45,920 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.95 vs. limit=2.0 2023-03-08 14:54:17,747 INFO [train.py:898] (3/4) Epoch 2, batch 350, loss[loss=0.4132, simple_loss=0.43, pruned_loss=0.1982, over 12154.00 frames. ], tot_loss[loss=0.3323, simple_loss=0.3828, pruned_loss=0.1409, over 2966939.44 frames. ], batch size: 129, lr: 4.34e-02, grad_scale: 8.0 2023-03-08 14:54:20,351 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3987.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:54:26,778 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=3992.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 14:54:48,494 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.7087, 5.3601, 5.6254, 5.4753, 5.4701, 6.1610, 5.7897, 5.5057], device='cuda:3'), covar=tensor([0.0441, 0.0574, 0.0512, 0.0495, 0.1025, 0.0425, 0.0455, 0.1408], device='cuda:3'), in_proj_covar=tensor([0.0151, 0.0146, 0.0131, 0.0119, 0.0168, 0.0164, 0.0121, 0.0175], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002], device='cuda:3') 2023-03-08 14:54:56,243 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.948e+02 5.340e+02 6.954e+02 8.950e+02 2.037e+03, threshold=1.391e+03, percent-clipped=3.0 2023-03-08 14:55:18,885 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4032.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 14:55:22,114 INFO [train.py:898] (3/4) Epoch 2, batch 400, loss[loss=0.3786, simple_loss=0.4188, pruned_loss=0.1692, over 18047.00 frames. ], tot_loss[loss=0.3314, simple_loss=0.3825, pruned_loss=0.1402, over 3107547.81 frames. ], batch size: 62, lr: 4.33e-02, grad_scale: 8.0 2023-03-08 14:55:38,918 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3574, 3.7011, 4.0492, 3.7770, 4.5012, 4.1618, 3.8852, 3.6546], device='cuda:3'), covar=tensor([0.0602, 0.0317, 0.0130, 0.0270, 0.0075, 0.0308, 0.0178, 0.0448], device='cuda:3'), in_proj_covar=tensor([0.0030, 0.0025, 0.0022, 0.0025, 0.0015, 0.0028, 0.0019, 0.0029], device='cuda:3'), out_proj_covar=tensor([1.9382e-05, 1.5557e-05, 1.2809e-05, 1.5328e-05, 8.9146e-06, 1.9082e-05, 1.1398e-05, 1.8393e-05], device='cuda:3') 2023-03-08 14:55:45,768 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7467, 4.8365, 3.4011, 4.4604, 4.7382, 2.9268, 4.4882, 4.0026], device='cuda:3'), covar=tensor([0.0067, 0.0052, 0.1196, 0.0060, 0.0034, 0.0725, 0.0205, 0.0312], device='cuda:3'), in_proj_covar=tensor([0.0046, 0.0042, 0.0114, 0.0047, 0.0039, 0.0081, 0.0072, 0.0068], device='cuda:3'), out_proj_covar=tensor([3.5813e-05, 3.4398e-05, 1.0324e-04, 3.6222e-05, 3.0999e-05, 7.3557e-05, 6.6774e-05, 6.4038e-05], device='cuda:3') 2023-03-08 14:56:22,114 INFO [train.py:898] (3/4) Epoch 2, batch 450, loss[loss=0.3499, simple_loss=0.393, pruned_loss=0.1534, over 15946.00 frames. ], tot_loss[loss=0.3277, simple_loss=0.3793, pruned_loss=0.1381, over 3226663.24 frames. 
], batch size: 94, lr: 4.31e-02, grad_scale: 8.0 2023-03-08 14:56:56,420 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.730e+02 6.143e+02 7.834e+02 1.006e+03 1.697e+03, threshold=1.567e+03, percent-clipped=3.0 2023-03-08 14:56:57,880 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6995, 3.6658, 3.8441, 3.5293, 2.6783, 3.9956, 4.0782, 2.9464], device='cuda:3'), covar=tensor([0.0119, 0.0127, 0.0112, 0.0153, 0.0583, 0.0073, 0.0147, 0.0415], device='cuda:3'), in_proj_covar=tensor([0.0032, 0.0027, 0.0029, 0.0029, 0.0050, 0.0027, 0.0025, 0.0045], device='cuda:3'), out_proj_covar=tensor([3.0227e-05, 2.3231e-05, 2.4593e-05, 2.7534e-05, 4.3839e-05, 2.1552e-05, 2.2195e-05, 4.0356e-05], device='cuda:3') 2023-03-08 14:57:11,750 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7314, 3.7650, 3.7071, 3.4058, 2.3446, 3.8868, 4.0953, 2.7566], device='cuda:3'), covar=tensor([0.0117, 0.0110, 0.0110, 0.0166, 0.0681, 0.0092, 0.0095, 0.0441], device='cuda:3'), in_proj_covar=tensor([0.0032, 0.0028, 0.0029, 0.0030, 0.0051, 0.0028, 0.0025, 0.0046], device='cuda:3'), out_proj_covar=tensor([3.0532e-05, 2.3588e-05, 2.4867e-05, 2.7953e-05, 4.4449e-05, 2.1952e-05, 2.2528e-05, 4.0851e-05], device='cuda:3') 2023-03-08 14:57:21,305 INFO [train.py:898] (3/4) Epoch 2, batch 500, loss[loss=0.3322, simple_loss=0.3902, pruned_loss=0.1371, over 18485.00 frames. ], tot_loss[loss=0.3254, simple_loss=0.3774, pruned_loss=0.1366, over 3315928.22 frames. ], batch size: 59, lr: 4.30e-02, grad_scale: 8.0 2023-03-08 14:57:28,847 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=4141.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:57:54,759 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5399, 5.9452, 5.6791, 5.6235, 5.3953, 5.8055, 6.0273, 5.9169], device='cuda:3'), covar=tensor([0.0726, 0.0619, 0.0266, 0.0435, 0.1136, 0.0332, 0.0375, 0.0466], device='cuda:3'), in_proj_covar=tensor([0.0191, 0.0187, 0.0141, 0.0180, 0.0226, 0.0164, 0.0171, 0.0158], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 14:58:19,879 INFO [train.py:898] (3/4) Epoch 2, batch 550, loss[loss=0.3467, simple_loss=0.4024, pruned_loss=0.1455, over 17132.00 frames. ], tot_loss[loss=0.3253, simple_loss=0.3773, pruned_loss=0.1367, over 3386163.22 frames. ], batch size: 78, lr: 4.29e-02, grad_scale: 8.0 2023-03-08 14:58:27,022 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.94 vs. limit=2.0 2023-03-08 14:58:42,142 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4202.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 14:58:42,246 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=4202.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 14:58:42,575 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.84 vs. limit=5.0 2023-03-08 14:58:56,104 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.937e+02 6.401e+02 7.607e+02 9.685e+02 1.732e+03, threshold=1.521e+03, percent-clipped=4.0 2023-03-08 14:59:20,552 INFO [train.py:898] (3/4) Epoch 2, batch 600, loss[loss=0.3452, simple_loss=0.3918, pruned_loss=0.1493, over 18394.00 frames. ], tot_loss[loss=0.3252, simple_loss=0.3773, pruned_loss=0.1365, over 3430684.03 frames. 
], batch size: 52, lr: 4.28e-02, grad_scale: 8.0 2023-03-08 14:59:22,740 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.98 vs. limit=2.0 2023-03-08 14:59:40,186 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=4250.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 14:59:56,962 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0517, 4.8404, 3.3257, 4.5857, 4.8624, 3.1760, 4.6838, 4.1340], device='cuda:3'), covar=tensor([0.0062, 0.0082, 0.1516, 0.0074, 0.0054, 0.0659, 0.0178, 0.0375], device='cuda:3'), in_proj_covar=tensor([0.0049, 0.0043, 0.0118, 0.0049, 0.0043, 0.0084, 0.0076, 0.0072], device='cuda:3'), out_proj_covar=tensor([3.9159e-05, 3.7049e-05, 1.0607e-04, 3.9430e-05, 3.4873e-05, 7.7116e-05, 7.0727e-05, 6.8768e-05], device='cuda:3') 2023-03-08 15:00:17,452 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4282.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 15:00:20,597 INFO [train.py:898] (3/4) Epoch 2, batch 650, loss[loss=0.315, simple_loss=0.3637, pruned_loss=0.1332, over 18547.00 frames. ], tot_loss[loss=0.3235, simple_loss=0.3759, pruned_loss=0.1355, over 3468678.74 frames. ], batch size: 49, lr: 4.27e-02, grad_scale: 8.0 2023-03-08 15:00:24,386 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=4288.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 15:00:37,077 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.86 vs. limit=5.0 2023-03-08 15:00:42,596 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9082, 4.8732, 3.7362, 4.7794, 4.9304, 2.9614, 4.5532, 4.3130], device='cuda:3'), covar=tensor([0.0063, 0.0076, 0.1066, 0.0071, 0.0036, 0.0763, 0.0243, 0.0331], device='cuda:3'), in_proj_covar=tensor([0.0050, 0.0043, 0.0122, 0.0050, 0.0044, 0.0088, 0.0078, 0.0074], device='cuda:3'), out_proj_covar=tensor([4.0674e-05, 3.8257e-05, 1.0916e-04, 4.0370e-05, 3.6273e-05, 8.0251e-05, 7.3317e-05, 7.1512e-05], device='cuda:3') 2023-03-08 15:00:54,988 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3203, 5.8252, 5.6058, 5.4260, 5.0647, 5.6033, 5.9022, 5.7463], device='cuda:3'), covar=tensor([0.0801, 0.0610, 0.0329, 0.0565, 0.1517, 0.0506, 0.0405, 0.0527], device='cuda:3'), in_proj_covar=tensor([0.0198, 0.0195, 0.0142, 0.0187, 0.0238, 0.0172, 0.0173, 0.0167], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 15:00:55,829 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.352e+02 6.066e+02 7.398e+02 9.036e+02 1.375e+03, threshold=1.480e+03, percent-clipped=0.0 2023-03-08 15:01:17,604 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4332.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 15:01:20,628 INFO [train.py:898] (3/4) Epoch 2, batch 700, loss[loss=0.3453, simple_loss=0.3937, pruned_loss=0.1484, over 18333.00 frames. ], tot_loss[loss=0.3235, simple_loss=0.3758, pruned_loss=0.1356, over 3488810.74 frames. ], batch size: 56, lr: 4.26e-02, grad_scale: 8.0 2023-03-08 15:01:37,891 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=4349.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 15:02:14,424 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=4380.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 15:02:16,107 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.86 vs. 
limit=5.0 2023-03-08 15:02:19,780 INFO [train.py:898] (3/4) Epoch 2, batch 750, loss[loss=0.3796, simple_loss=0.4159, pruned_loss=0.1717, over 12261.00 frames. ], tot_loss[loss=0.324, simple_loss=0.3765, pruned_loss=0.1357, over 3520536.38 frames. ], batch size: 130, lr: 4.25e-02, grad_scale: 8.0 2023-03-08 15:02:54,768 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.989e+02 6.069e+02 8.253e+02 1.039e+03 2.142e+03, threshold=1.651e+03, percent-clipped=4.0 2023-03-08 15:03:04,332 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-08 15:03:19,456 INFO [train.py:898] (3/4) Epoch 2, batch 800, loss[loss=0.2588, simple_loss=0.3087, pruned_loss=0.1044, over 18396.00 frames. ], tot_loss[loss=0.3241, simple_loss=0.3761, pruned_loss=0.1361, over 3538349.29 frames. ], batch size: 42, lr: 4.24e-02, grad_scale: 8.0 2023-03-08 15:03:34,564 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.2296, 4.8479, 4.8711, 4.3510, 2.7051, 2.6608, 4.4706, 4.7825], device='cuda:3'), covar=tensor([0.0468, 0.0097, 0.0060, 0.0121, 0.0547, 0.0688, 0.0102, 0.0028], device='cuda:3'), in_proj_covar=tensor([0.0050, 0.0029, 0.0023, 0.0037, 0.0055, 0.0061, 0.0037, 0.0024], device='cuda:3'), out_proj_covar=tensor([6.2892e-05, 3.9232e-05, 2.6382e-05, 4.4156e-05, 6.5804e-05, 7.2997e-05, 4.5290e-05, 2.6852e-05], device='cuda:3') 2023-03-08 15:03:47,252 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8114, 3.7693, 3.8939, 3.5076, 3.2615, 2.6679, 2.9176, 2.3732], device='cuda:3'), covar=tensor([0.0435, 0.0370, 0.0171, 0.0151, 0.0322, 0.0492, 0.0355, 0.0443], device='cuda:3'), in_proj_covar=tensor([0.0027, 0.0026, 0.0027, 0.0025, 0.0033, 0.0025, 0.0029, 0.0030], device='cuda:3'), out_proj_covar=tensor([6.1421e-05, 6.9643e-05, 5.2155e-05, 5.6708e-05, 7.3877e-05, 5.5152e-05, 5.7482e-05, 6.6568e-05], device='cuda:3') 2023-03-08 15:04:09,391 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2571, 5.1338, 4.3938, 5.2983, 5.1230, 4.8180, 5.1780, 4.5602], device='cuda:3'), covar=tensor([0.0240, 0.0261, 0.1705, 0.0326, 0.0221, 0.0313, 0.0264, 0.0452], device='cuda:3'), in_proj_covar=tensor([0.0150, 0.0161, 0.0244, 0.0136, 0.0137, 0.0171, 0.0163, 0.0183], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 15:04:19,437 INFO [train.py:898] (3/4) Epoch 2, batch 850, loss[loss=0.3347, simple_loss=0.3932, pruned_loss=0.1381, over 18237.00 frames. ], tot_loss[loss=0.3212, simple_loss=0.374, pruned_loss=0.1342, over 3556866.47 frames. 
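tot_loss is evidently not a plain per-epoch mean: the effective frame counts it is reported over (3556866.47 just above, drifting both up and down elsewhere) are fractional, which is the signature of a frame-weighted running average with geometric forgetting rather than a hard reset. A minimal tracker of that shape is below; with roughly 18k frames per batch, a decay near 1 - 18000/3.6e6 ~ 0.995 would reproduce the observed window size, though both the mechanism and the constant are assumptions:

    class RunningLoss:
        # Hypothetical frame-weighted running average with forgetting.
        # Decaying the accumulated (loss * frames, frames) totals yields
        # fractional effective frame counts like those in the log.
        def __init__(self, decay: float = 0.995):  # assumed, not logged
            self.decay = decay
            self.weighted_loss = 0.0
            self.frames = 0.0

        def update(self, loss: float, num_frames: float) -> None:
            self.weighted_loss = self.decay * self.weighted_loss + loss * num_frames
            self.frames = self.decay * self.frames + num_frames

        @property
        def value(self) -> float:
            return self.weighted_loss / max(self.frames, 1.0)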
], batch size: 60, lr: 4.23e-02, grad_scale: 8.0 2023-03-08 15:04:33,219 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4497.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 15:04:46,387 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4359, 5.2139, 4.5540, 5.4733, 5.3173, 4.8009, 5.1827, 4.6131], device='cuda:3'), covar=tensor([0.0201, 0.0283, 0.1452, 0.0286, 0.0233, 0.0368, 0.0307, 0.0533], device='cuda:3'), in_proj_covar=tensor([0.0151, 0.0162, 0.0247, 0.0137, 0.0138, 0.0170, 0.0164, 0.0184], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-08 15:04:53,484 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.476e+02 6.096e+02 7.477e+02 9.236e+02 1.546e+03, threshold=1.495e+03, percent-clipped=0.0 2023-03-08 15:05:14,121 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1862, 5.1224, 3.9959, 5.1417, 5.0569, 4.7324, 5.0251, 4.6042], device='cuda:3'), covar=tensor([0.0383, 0.0378, 0.2148, 0.0658, 0.0375, 0.0425, 0.0397, 0.0441], device='cuda:3'), in_proj_covar=tensor([0.0151, 0.0161, 0.0245, 0.0136, 0.0136, 0.0167, 0.0162, 0.0181], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-08 15:05:18,310 INFO [train.py:898] (3/4) Epoch 2, batch 900, loss[loss=0.3528, simple_loss=0.4003, pruned_loss=0.1526, over 18093.00 frames. ], tot_loss[loss=0.32, simple_loss=0.3733, pruned_loss=0.1334, over 3574627.68 frames. ], batch size: 62, lr: 4.22e-02, grad_scale: 8.0 2023-03-08 15:06:14,740 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4582.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 15:06:17,753 INFO [train.py:898] (3/4) Epoch 2, batch 950, loss[loss=0.3123, simple_loss=0.3701, pruned_loss=0.1272, over 18518.00 frames. ], tot_loss[loss=0.3202, simple_loss=0.3729, pruned_loss=0.1337, over 3573316.56 frames. ], batch size: 51, lr: 4.21e-02, grad_scale: 8.0 2023-03-08 15:06:52,449 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.369e+02 6.106e+02 7.676e+02 9.311e+02 1.838e+03, threshold=1.535e+03, percent-clipped=6.0 2023-03-08 15:07:12,185 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=4630.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 15:07:18,131 INFO [train.py:898] (3/4) Epoch 2, batch 1000, loss[loss=0.3329, simple_loss=0.387, pruned_loss=0.1394, over 17878.00 frames. ], tot_loss[loss=0.3184, simple_loss=0.3712, pruned_loss=0.1328, over 3573057.17 frames. ], batch size: 70, lr: 4.20e-02, grad_scale: 8.0 2023-03-08 15:07:28,741 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4644.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 15:07:39,316 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7689, 4.4281, 4.4285, 4.2080, 3.5655, 4.4699, 4.4421, 4.3773], device='cuda:3'), covar=tensor([0.2509, 0.0976, 0.0979, 0.1026, 0.3095, 0.0769, 0.0963, 0.0818], device='cuda:3'), in_proj_covar=tensor([0.0213, 0.0207, 0.0159, 0.0204, 0.0269, 0.0188, 0.0195, 0.0176], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-08 15:08:17,974 INFO [train.py:898] (3/4) Epoch 2, batch 1050, loss[loss=0.2938, simple_loss=0.362, pruned_loss=0.1128, over 18621.00 frames. ], tot_loss[loss=0.3175, simple_loss=0.3709, pruned_loss=0.1321, over 3575162.64 frames. 
], batch size: 52, lr: 4.19e-02, grad_scale: 8.0 2023-03-08 15:08:52,053 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.788e+02 5.479e+02 6.722e+02 8.127e+02 1.317e+03, threshold=1.344e+03, percent-clipped=0.0 2023-03-08 15:09:17,312 INFO [train.py:898] (3/4) Epoch 2, batch 1100, loss[loss=0.3502, simple_loss=0.4042, pruned_loss=0.148, over 16123.00 frames. ], tot_loss[loss=0.3158, simple_loss=0.3699, pruned_loss=0.1308, over 3571603.52 frames. ], batch size: 95, lr: 4.18e-02, grad_scale: 4.0 2023-03-08 15:09:32,463 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0 2023-03-08 15:10:00,779 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4537, 3.4754, 3.9118, 3.2069, 3.5190, 4.3577, 4.3422, 3.6509], device='cuda:3'), covar=tensor([0.0670, 0.0608, 0.0164, 0.0840, 0.1847, 0.0100, 0.0204, 0.0483], device='cuda:3'), in_proj_covar=tensor([0.0054, 0.0054, 0.0042, 0.0057, 0.0098, 0.0040, 0.0046, 0.0049], device='cuda:3'), out_proj_covar=tensor([3.2318e-05, 3.4493e-05, 2.4836e-05, 3.6468e-05, 6.5593e-05, 2.2286e-05, 2.5885e-05, 2.8504e-05], device='cuda:3') 2023-03-08 15:10:06,258 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.05 vs. limit=2.0 2023-03-08 15:10:17,514 INFO [train.py:898] (3/4) Epoch 2, batch 1150, loss[loss=0.3186, simple_loss=0.3589, pruned_loss=0.1392, over 18259.00 frames. ], tot_loss[loss=0.3157, simple_loss=0.3696, pruned_loss=0.1309, over 3574622.85 frames. ], batch size: 47, lr: 4.17e-02, grad_scale: 4.0 2023-03-08 15:10:31,924 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4797.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 15:10:52,315 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.612e+02 6.045e+02 7.965e+02 9.549e+02 1.896e+03, threshold=1.593e+03, percent-clipped=3.0 2023-03-08 15:11:16,899 INFO [train.py:898] (3/4) Epoch 2, batch 1200, loss[loss=0.3229, simple_loss=0.3768, pruned_loss=0.1346, over 18488.00 frames. ], tot_loss[loss=0.3163, simple_loss=0.3701, pruned_loss=0.1313, over 3581195.05 frames. ], batch size: 51, lr: 4.16e-02, grad_scale: 8.0 2023-03-08 15:11:20,949 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1255, 3.8866, 4.4308, 3.3213, 4.0386, 4.5993, 4.3330, 3.8520], device='cuda:3'), covar=tensor([0.0406, 0.0359, 0.0164, 0.0803, 0.1281, 0.0061, 0.0170, 0.0363], device='cuda:3'), in_proj_covar=tensor([0.0055, 0.0054, 0.0042, 0.0058, 0.0099, 0.0042, 0.0047, 0.0050], device='cuda:3'), out_proj_covar=tensor([3.3199e-05, 3.4432e-05, 2.4950e-05, 3.7862e-05, 6.6412e-05, 2.3049e-05, 2.6804e-05, 2.9809e-05], device='cuda:3') 2023-03-08 15:11:29,239 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=4845.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 15:12:16,085 INFO [train.py:898] (3/4) Epoch 2, batch 1250, loss[loss=0.27, simple_loss=0.3304, pruned_loss=0.1048, over 17582.00 frames. ], tot_loss[loss=0.3125, simple_loss=0.3673, pruned_loss=0.1289, over 3586157.18 frames. 
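The learning-rate column falls smoothly with batch count inside an epoch (4.19e-02 down to 4.15e-02 across this stretch) and takes an extra step when the epoch index increments (4.50e-02 at the end of epoch 1 versus 4.41e-02 at epoch 2, batch 0), which is the shape of a schedule that discounts both counters at once. A generic sketch of such a dual decay; I have not fitted the constants to the logged values, so treat them as placeholders:

    def scheduled_lr(base_lr: float, batch: int, epoch: int,
                     lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
        # Hypothetical dual-decay schedule: lr falls smoothly in `batch`
        # and steps down with `epoch`. Constants are placeholders, not
        # values verified against this run.
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    # e.g. scheduled_lr(0.05, batch=4800, epoch=2) decays in both arguments.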
], batch size: 39, lr: 4.15e-02, grad_scale: 8.0
2023-03-08 15:12:37,429 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2229, 5.0577, 4.2701, 5.2282, 5.1770, 4.5985, 5.1425, 4.4889], device='cuda:3'), covar=tensor([0.0264, 0.0352, 0.2101, 0.0436, 0.0248, 0.0408, 0.0316, 0.0577], device='cuda:3'), in_proj_covar=tensor([0.0158, 0.0171, 0.0270, 0.0144, 0.0145, 0.0179, 0.0170, 0.0190], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0004, 0.0002, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-08 15:12:51,971 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.909e+02 5.792e+02 6.888e+02 8.674e+02 1.521e+03, threshold=1.378e+03, percent-clipped=0.0
2023-03-08 15:13:15,905 INFO [train.py:898] (3/4) Epoch 2, batch 1300, loss[loss=0.4001, simple_loss=0.4215, pruned_loss=0.1894, over 12077.00 frames. ], tot_loss[loss=0.3123, simple_loss=0.367, pruned_loss=0.1289, over 3581865.70 frames. ], batch size: 130, lr: 4.14e-02, grad_scale: 8.0
2023-03-08 15:13:27,221 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4944.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:14:10,318 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.84 vs. limit=2.0
2023-03-08 15:14:15,426 INFO [train.py:898] (3/4) Epoch 2, batch 1350, loss[loss=0.2786, simple_loss=0.3426, pruned_loss=0.1073, over 18375.00 frames. ], tot_loss[loss=0.3102, simple_loss=0.3652, pruned_loss=0.1276, over 3591367.80 frames. ], batch size: 46, lr: 4.13e-02, grad_scale: 8.0
2023-03-08 15:14:24,037 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=4992.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:14:51,638 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.370e+02 5.891e+02 6.846e+02 8.415e+02 1.418e+03, threshold=1.369e+03, percent-clipped=1.0
2023-03-08 15:15:15,343 INFO [train.py:898] (3/4) Epoch 2, batch 1400, loss[loss=0.3021, simple_loss=0.3589, pruned_loss=0.1226, over 18280.00 frames. ], tot_loss[loss=0.3102, simple_loss=0.3653, pruned_loss=0.1276, over 3590297.04 frames. ], batch size: 49, lr: 4.12e-02, grad_scale: 8.0
2023-03-08 15:16:14,511 INFO [train.py:898] (3/4) Epoch 2, batch 1450, loss[loss=0.3103, simple_loss=0.3724, pruned_loss=0.1241, over 18478.00 frames. ], tot_loss[loss=0.3096, simple_loss=0.3648, pruned_loss=0.1272, over 3595116.45 frames. ], batch size: 59, lr: 4.11e-02, grad_scale: 8.0
2023-03-08 15:16:50,484 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.583e+02 6.040e+02 7.064e+02 8.729e+02 1.298e+03, threshold=1.413e+03, percent-clipped=0.0
2023-03-08 15:17:13,680 INFO [train.py:898] (3/4) Epoch 2, batch 1500, loss[loss=0.2696, simple_loss=0.3295, pruned_loss=0.1048, over 18522.00 frames. ], tot_loss[loss=0.3089, simple_loss=0.3641, pruned_loss=0.1268, over 3590143.18 frames. ], batch size: 47, lr: 4.10e-02, grad_scale: 8.0
2023-03-08 15:17:30,372 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1796, 3.7243, 1.8467, 3.8689, 3.5076, 4.5321, 1.9706, 4.3003], device='cuda:3'), covar=tensor([0.0148, 0.0719, 0.1690, 0.0357, 0.0531, 0.0053, 0.1492, 0.0187], device='cuda:3'), in_proj_covar=tensor([0.0058, 0.0094, 0.0095, 0.0046, 0.0080, 0.0046, 0.0092, 0.0077], device='cuda:3'), out_proj_covar=tensor([5.6778e-05, 1.0166e-04, 1.0467e-04, 5.9356e-05, 8.2498e-05, 4.2947e-05, 9.5187e-05, 7.6202e-05], device='cuda:3')
2023-03-08 15:17:49,205 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3120, 5.1780, 5.1433, 5.3796, 5.2151, 5.8055, 5.4562, 5.2965], device='cuda:3'), covar=tensor([0.0619, 0.0606, 0.0718, 0.0505, 0.1113, 0.0592, 0.0511, 0.1300], device='cuda:3'), in_proj_covar=tensor([0.0170, 0.0144, 0.0142, 0.0125, 0.0180, 0.0178, 0.0129, 0.0182], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002], device='cuda:3')
2023-03-08 15:18:11,879 INFO [train.py:898] (3/4) Epoch 2, batch 1550, loss[loss=0.3702, simple_loss=0.4076, pruned_loss=0.1664, over 16265.00 frames. ], tot_loss[loss=0.3102, simple_loss=0.3652, pruned_loss=0.1276, over 3590037.04 frames. ], batch size: 94, lr: 4.08e-02, grad_scale: 8.0
2023-03-08 15:18:20,894 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3653, 3.3745, 2.3510, 3.0960, 3.1604, 3.1232, 2.7580, 3.2503], device='cuda:3'), covar=tensor([0.0408, 0.0285, 0.0544, 0.0233, 0.0536, 0.0472, 0.0233, 0.0116], device='cuda:3'), in_proj_covar=tensor([0.0056, 0.0045, 0.0079, 0.0062, 0.0054, 0.0045, 0.0052, 0.0055], device='cuda:3'), out_proj_covar=tensor([7.8252e-05, 6.4388e-05, 1.0583e-04, 7.5665e-05, 7.8127e-05, 6.1036e-05, 6.5028e-05, 6.0354e-05], device='cuda:3')
2023-03-08 15:18:48,960 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.611e+02 6.188e+02 7.756e+02 9.428e+02 1.707e+03, threshold=1.551e+03, percent-clipped=5.0
2023-03-08 15:19:11,986 INFO [train.py:898] (3/4) Epoch 2, batch 1600, loss[loss=0.2766, simple_loss=0.3254, pruned_loss=0.1139, over 17723.00 frames. ], tot_loss[loss=0.3121, simple_loss=0.367, pruned_loss=0.1286, over 3574890.12 frames. ], batch size: 39, lr: 4.07e-02, grad_scale: 8.0
2023-03-08 15:19:35,811 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.03 vs. limit=2.0
2023-03-08 15:19:49,137 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.76 vs. limit=2.0
2023-03-08 15:20:11,418 INFO [train.py:898] (3/4) Epoch 2, batch 1650, loss[loss=0.272, simple_loss=0.3374, pruned_loss=0.1033, over 18557.00 frames. ], tot_loss[loss=0.3113, simple_loss=0.3667, pruned_loss=0.128, over 3585045.64 frames. ], batch size: 49, lr: 4.06e-02, grad_scale: 8.0
2023-03-08 15:20:23,721 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0082, 4.1697, 4.1444, 3.9572, 2.7911, 4.6091, 3.8899, 2.5519], device='cuda:3'), covar=tensor([0.0133, 0.0109, 0.0064, 0.0122, 0.0743, 0.0043, 0.0124, 0.0711], device='cuda:3'), in_proj_covar=tensor([0.0047, 0.0043, 0.0041, 0.0042, 0.0079, 0.0040, 0.0036, 0.0077], device='cuda:3'), out_proj_covar=tensor([4.5656e-05, 3.8755e-05, 3.9594e-05, 4.1731e-05, 7.4182e-05, 3.4666e-05, 3.7614e-05, 7.3774e-05], device='cuda:3')
2023-03-08 15:20:41,376 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8072, 3.2964, 1.7087, 3.8407, 2.7168, 4.1006, 1.9432, 3.8242], device='cuda:3'), covar=tensor([0.0198, 0.0861, 0.2057, 0.0369, 0.0924, 0.0078, 0.1663, 0.0314], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0098, 0.0100, 0.0052, 0.0089, 0.0051, 0.0096, 0.0082], device='cuda:3'), out_proj_covar=tensor([6.1812e-05, 1.0684e-04, 1.1098e-04, 6.6223e-05, 9.4076e-05, 4.7759e-05, 9.9681e-05, 8.2132e-05], device='cuda:3')
2023-03-08 15:20:47,174 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.457e+02 5.814e+02 6.706e+02 8.624e+02 1.400e+03, threshold=1.341e+03, percent-clipped=0.0
2023-03-08 15:21:10,522 INFO [train.py:898] (3/4) Epoch 2, batch 1700, loss[loss=0.2981, simple_loss=0.3678, pruned_loss=0.1141, over 18223.00 frames. ], tot_loss[loss=0.3102, simple_loss=0.3659, pruned_loss=0.1273, over 3580080.85 frames. ], batch size: 60, lr: 4.05e-02, grad_scale: 8.0
2023-03-08 15:21:43,995 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3527, 3.4178, 2.5506, 3.1660, 3.1243, 3.0679, 2.6394, 3.2303], device='cuda:3'), covar=tensor([0.0504, 0.0244, 0.0834, 0.0316, 0.0569, 0.0633, 0.0418, 0.0150], device='cuda:3'), in_proj_covar=tensor([0.0056, 0.0047, 0.0080, 0.0064, 0.0055, 0.0044, 0.0051, 0.0053], device='cuda:3'), out_proj_covar=tensor([7.8912e-05, 6.6795e-05, 1.0834e-04, 7.8275e-05, 8.0813e-05, 5.9938e-05, 6.6323e-05, 6.1119e-05], device='cuda:3')
2023-03-08 15:22:10,168 INFO [train.py:898] (3/4) Epoch 2, batch 1750, loss[loss=0.2908, simple_loss=0.3453, pruned_loss=0.1182, over 18229.00 frames. ], tot_loss[loss=0.3093, simple_loss=0.3652, pruned_loss=0.1267, over 3575610.83 frames. ], batch size: 45, lr: 4.04e-02, grad_scale: 8.0
2023-03-08 15:22:46,219 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.642e+02 5.505e+02 6.727e+02 8.573e+02 1.772e+03, threshold=1.345e+03, percent-clipped=4.0
2023-03-08 15:23:10,300 INFO [train.py:898] (3/4) Epoch 2, batch 1800, loss[loss=0.3462, simple_loss=0.3948, pruned_loss=0.1488, over 18245.00 frames. ], tot_loss[loss=0.3087, simple_loss=0.3646, pruned_loss=0.1264, over 3568817.88 frames. ], batch size: 60, lr: 4.03e-02, grad_scale: 8.0
2023-03-08 15:23:15,373 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8077, 5.0113, 2.7860, 4.6715, 4.9280, 2.8245, 4.4912, 3.6811], device='cuda:3'), covar=tensor([0.0070, 0.0116, 0.1881, 0.0156, 0.0052, 0.1190, 0.0407, 0.0867], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0058, 0.0154, 0.0078, 0.0056, 0.0124, 0.0113, 0.0106], device='cuda:3'), out_proj_covar=tensor([5.8442e-05, 6.7602e-05, 1.4021e-04, 7.2687e-05, 5.4005e-05, 1.1797e-04, 1.1235e-04, 1.1110e-04], device='cuda:3')
2023-03-08 15:23:33,241 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7676, 3.5845, 4.0942, 3.0172, 3.5405, 3.8596, 3.9671, 3.9208], device='cuda:3'), covar=tensor([0.0341, 0.0356, 0.0117, 0.0633, 0.1154, 0.0063, 0.0164, 0.0232], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0063, 0.0046, 0.0067, 0.0121, 0.0049, 0.0055, 0.0059], device='cuda:3'), out_proj_covar=tensor([4.2948e-05, 4.2715e-05, 3.0193e-05, 4.6364e-05, 8.4222e-05, 3.0026e-05, 3.3890e-05, 3.8247e-05], device='cuda:3')
2023-03-08 15:24:08,806 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4566, 4.2551, 4.6340, 4.5130, 4.1901, 4.3444, 4.8055, 4.8384], device='cuda:3'), covar=tensor([0.0127, 0.0215, 0.0129, 0.0111, 0.0199, 0.0145, 0.0119, 0.0121], device='cuda:3'), in_proj_covar=tensor([0.0059, 0.0049, 0.0047, 0.0054, 0.0053, 0.0060, 0.0052, 0.0049], device='cuda:3'), out_proj_covar=tensor([1.4001e-04, 9.8649e-05, 9.4184e-05, 1.0111e-04, 1.1421e-04, 1.3779e-04, 1.0670e-04, 9.8705e-05], device='cuda:3')
2023-03-08 15:24:09,615 INFO [train.py:898] (3/4) Epoch 2, batch 1850, loss[loss=0.3436, simple_loss=0.3927, pruned_loss=0.1472, over 18261.00 frames. ], tot_loss[loss=0.3074, simple_loss=0.3637, pruned_loss=0.1255, over 3579804.92 frames. ], batch size: 60, lr: 4.02e-02, grad_scale: 8.0
2023-03-08 15:24:42,314 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.57 vs. limit=5.0
2023-03-08 15:24:45,051 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.702e+02 5.666e+02 6.741e+02 8.273e+02 2.014e+03, threshold=1.348e+03, percent-clipped=2.0
2023-03-08 15:25:08,313 INFO [train.py:898] (3/4) Epoch 2, batch 1900, loss[loss=0.2979, simple_loss=0.3654, pruned_loss=0.1152, over 18379.00 frames. ], tot_loss[loss=0.3082, simple_loss=0.3646, pruned_loss=0.1259, over 3572496.93 frames. ], batch size: 55, lr: 4.01e-02, grad_scale: 8.0
2023-03-08 15:25:51,989 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([1.8281, 4.2727, 4.4892, 2.9669, 2.8753, 2.5171, 3.8793, 4.1341], device='cuda:3'), covar=tensor([0.0971, 0.0248, 0.0044, 0.0362, 0.0841, 0.1103, 0.0219, 0.0045], device='cuda:3'), in_proj_covar=tensor([0.0076, 0.0036, 0.0032, 0.0055, 0.0085, 0.0094, 0.0058, 0.0032], device='cuda:3'), out_proj_covar=tensor([1.0791e-04, 6.1789e-05, 4.4905e-05, 7.5827e-05, 1.1068e-04, 1.2387e-04, 7.8993e-05, 4.3634e-05], device='cuda:3')
2023-03-08 15:26:07,462 INFO [train.py:898] (3/4) Epoch 2, batch 1950, loss[loss=0.2739, simple_loss=0.3336, pruned_loss=0.1071, over 18377.00 frames. ], tot_loss[loss=0.3073, simple_loss=0.3638, pruned_loss=0.1254, over 3580993.58 frames. ], batch size: 46, lr: 4.00e-02, grad_scale: 8.0
2023-03-08 15:26:42,648 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.871e+02 5.827e+02 7.380e+02 9.024e+02 1.685e+03, threshold=1.476e+03, percent-clipped=2.0
2023-03-08 15:26:47,979 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.92 vs. limit=2.0
2023-03-08 15:26:58,180 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2602, 3.9965, 4.4120, 2.7577, 4.1118, 4.2662, 4.0943, 3.7364], device='cuda:3'), covar=tensor([0.0475, 0.0239, 0.0077, 0.0454, 0.0147, 0.0412, 0.0301, 0.0492], device='cuda:3'), in_proj_covar=tensor([0.0060, 0.0050, 0.0036, 0.0050, 0.0037, 0.0072, 0.0033, 0.0063], device='cuda:3'), out_proj_covar=tensor([4.8532e-05, 4.0351e-05, 2.7315e-05, 4.1168e-05, 3.1582e-05, 5.8383e-05, 2.9334e-05, 4.9792e-05], device='cuda:3')
2023-03-08 15:27:06,891 INFO [train.py:898] (3/4) Epoch 2, batch 2000, loss[loss=0.2905, simple_loss=0.3451, pruned_loss=0.1179, over 18550.00 frames. ], tot_loss[loss=0.3069, simple_loss=0.3636, pruned_loss=0.1251, over 3578988.84 frames. ], batch size: 49, lr: 3.99e-02, grad_scale: 8.0
2023-03-08 15:27:18,484 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8056, 3.5183, 4.2489, 3.1332, 4.0426, 3.9619, 3.8322, 3.2768], device='cuda:3'), covar=tensor([0.0577, 0.0293, 0.0088, 0.0377, 0.0137, 0.0422, 0.0304, 0.0602], device='cuda:3'), in_proj_covar=tensor([0.0058, 0.0049, 0.0035, 0.0049, 0.0036, 0.0071, 0.0032, 0.0062], device='cuda:3'), out_proj_covar=tensor([4.7639e-05, 3.9617e-05, 2.6778e-05, 4.0421e-05, 3.1211e-05, 5.7283e-05, 2.8740e-05, 4.8896e-05], device='cuda:3')
2023-03-08 15:27:24,051 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5650.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 15:27:46,961 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6846, 5.1810, 5.5846, 5.4659, 5.4922, 6.1713, 5.6143, 5.5357], device='cuda:3'), covar=tensor([0.0572, 0.0591, 0.0564, 0.0449, 0.1011, 0.0534, 0.0654, 0.1375], device='cuda:3'), in_proj_covar=tensor([0.0173, 0.0141, 0.0139, 0.0128, 0.0183, 0.0186, 0.0129, 0.0190], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002], device='cuda:3')
2023-03-08 15:27:48,990 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.73 vs. limit=2.0
2023-03-08 15:27:53,255 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.88 vs. limit=2.0
2023-03-08 15:28:03,794 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4769, 3.2732, 2.5015, 3.2278, 2.9964, 2.8906, 2.8605, 3.2663], device='cuda:3'), covar=tensor([0.0171, 0.0199, 0.0311, 0.0169, 0.0289, 0.0240, 0.0262, 0.0665], device='cuda:3'), in_proj_covar=tensor([0.0038, 0.0040, 0.0039, 0.0039, 0.0038, 0.0052, 0.0055, 0.0037], device='cuda:3'), out_proj_covar=tensor([4.8114e-05, 4.9864e-05, 5.6731e-05, 4.7138e-05, 5.0495e-05, 6.3979e-05, 6.5729e-05, 5.4326e-05], device='cuda:3')
2023-03-08 15:28:05,538 INFO [train.py:898] (3/4) Epoch 2, batch 2050, loss[loss=0.2768, simple_loss=0.3376, pruned_loss=0.1081, over 18259.00 frames. ], tot_loss[loss=0.3049, simple_loss=0.362, pruned_loss=0.1239, over 3583912.08 frames. ], batch size: 47, lr: 3.98e-02, grad_scale: 8.0
2023-03-08 15:28:35,737 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5711.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 15:28:42,463 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.372e+02 5.884e+02 6.946e+02 9.338e+02 2.315e+03, threshold=1.389e+03, percent-clipped=7.0
2023-03-08 15:29:03,510 INFO [train.py:898] (3/4) Epoch 2, batch 2100, loss[loss=0.2878, simple_loss=0.3411, pruned_loss=0.1172, over 18506.00 frames. ], tot_loss[loss=0.305, simple_loss=0.3621, pruned_loss=0.124, over 3580625.11 frames. ], batch size: 47, lr: 3.97e-02, grad_scale: 2.0
2023-03-08 15:29:27,931 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5755.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:29:49,900 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.91 vs. limit=2.0
2023-03-08 15:30:02,854 INFO [train.py:898] (3/4) Epoch 2, batch 2150, loss[loss=0.2868, simple_loss=0.3484, pruned_loss=0.1126, over 18299.00 frames. ], tot_loss[loss=0.3051, simple_loss=0.3626, pruned_loss=0.1237, over 3580905.95 frames. ], batch size: 49, lr: 3.96e-02, grad_scale: 2.0
2023-03-08 15:30:05,424 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0276, 4.9691, 5.1020, 4.6815, 4.4555, 4.6755, 4.4093, 4.6117], device='cuda:3'), covar=tensor([0.0420, 0.0324, 0.0176, 0.0233, 0.0660, 0.0302, 0.0681, 0.0384], device='cuda:3'), in_proj_covar=tensor([0.0087, 0.0107, 0.0094, 0.0088, 0.0108, 0.0109, 0.0132, 0.0103], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-08 15:30:24,054 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7044, 5.0841, 3.4654, 4.5177, 4.7681, 4.9994, 4.9260, 2.7482], device='cuda:3'), covar=tensor([0.0247, 0.0049, 0.0359, 0.0076, 0.0064, 0.0097, 0.0104, 0.0888], device='cuda:3'), in_proj_covar=tensor([0.0042, 0.0024, 0.0043, 0.0029, 0.0032, 0.0028, 0.0033, 0.0056], device='cuda:3'), out_proj_covar=tensor([1.2204e-04, 7.3430e-05, 1.2507e-04, 9.4495e-05, 8.2648e-05, 7.9707e-05, 8.9335e-05, 1.3970e-04], device='cuda:3')
2023-03-08 15:30:33,461 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6899, 4.9712, 3.3825, 4.4452, 4.6747, 4.8530, 4.7709, 2.8502], device='cuda:3'), covar=tensor([0.0262, 0.0082, 0.0407, 0.0109, 0.0134, 0.0124, 0.0141, 0.0864], device='cuda:3'), in_proj_covar=tensor([0.0042, 0.0024, 0.0043, 0.0029, 0.0033, 0.0028, 0.0033, 0.0056], device='cuda:3'), out_proj_covar=tensor([1.2264e-04, 7.3996e-05, 1.2591e-04, 9.5273e-05, 8.3574e-05, 8.0351e-05, 8.9743e-05, 1.4047e-04], device='cuda:3')
2023-03-08 15:30:40,460 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5816.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:30:41,163 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.344e+02 5.674e+02 6.514e+02 8.678e+02 1.455e+03, threshold=1.303e+03, percent-clipped=1.0
2023-03-08 15:31:01,979 INFO [train.py:898] (3/4) Epoch 2, batch 2200, loss[loss=0.3167, simple_loss=0.3777, pruned_loss=0.1278, over 18350.00 frames. ], tot_loss[loss=0.3056, simple_loss=0.363, pruned_loss=0.1241, over 3575612.23 frames. ], batch size: 56, lr: 3.95e-02, grad_scale: 2.0
2023-03-08 15:31:29,448 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5858.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:32:00,586 INFO [train.py:898] (3/4) Epoch 2, batch 2250, loss[loss=0.313, simple_loss=0.3701, pruned_loss=0.1279, over 18300.00 frames. ], tot_loss[loss=0.3051, simple_loss=0.3629, pruned_loss=0.1237, over 3589899.79 frames. ], batch size: 57, lr: 3.95e-02, grad_scale: 2.0
2023-03-08 15:32:31,250 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.12 vs. limit=2.0
2023-03-08 15:32:38,784 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.697e+02 5.715e+02 7.226e+02 9.109e+02 2.021e+03, threshold=1.445e+03, percent-clipped=5.0
2023-03-08 15:32:41,401 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5919.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:32:59,628 INFO [train.py:898] (3/4) Epoch 2, batch 2300, loss[loss=0.2754, simple_loss=0.3451, pruned_loss=0.1029, over 18408.00 frames. ], tot_loss[loss=0.3035, simple_loss=0.3612, pruned_loss=0.1229, over 3586854.40 frames. ], batch size: 48, lr: 3.94e-02, grad_scale: 2.0
2023-03-08 15:33:58,222 INFO [train.py:898] (3/4) Epoch 2, batch 2350, loss[loss=0.2528, simple_loss=0.3188, pruned_loss=0.09341, over 18169.00 frames. ], tot_loss[loss=0.3029, simple_loss=0.3608, pruned_loss=0.1225, over 3585881.99 frames. ], batch size: 44, lr: 3.93e-02, grad_scale: 2.0
2023-03-08 15:34:04,294 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5990.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:34:12,496 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0
2023-03-08 15:34:27,109 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6006.0, num_to_drop=1, layers_to_drop={2}
2023-03-08 15:34:39,272 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.355e+02 5.403e+02 7.312e+02 8.931e+02 1.402e+03, threshold=1.462e+03, percent-clipped=0.0
2023-03-08 15:35:00,407 INFO [train.py:898] (3/4) Epoch 2, batch 2400, loss[loss=0.2837, simple_loss=0.3522, pruned_loss=0.1076, over 18498.00 frames. ], tot_loss[loss=0.306, simple_loss=0.3631, pruned_loss=0.1244, over 3561941.47 frames. ], batch size: 53, lr: 3.92e-02, grad_scale: 4.0
2023-03-08 15:35:07,922 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9919, 3.4130, 3.5867, 2.9906, 3.5755, 3.4897, 3.5427, 2.9639], device='cuda:3'), covar=tensor([0.0412, 0.0206, 0.0068, 0.0236, 0.0144, 0.0391, 0.0196, 0.0467], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0061, 0.0040, 0.0059, 0.0046, 0.0086, 0.0039, 0.0077], device='cuda:3'), out_proj_covar=tensor([5.7817e-05, 5.1700e-05, 3.2615e-05, 5.1228e-05, 4.1815e-05, 7.2823e-05, 3.7634e-05, 6.2962e-05], device='cuda:3')
2023-03-08 15:35:10,258 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6043.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:35:19,713 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6051.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:35:59,380 INFO [train.py:898] (3/4) Epoch 2, batch 2450, loss[loss=0.309, simple_loss=0.3704, pruned_loss=0.1237, over 17103.00 frames. ], tot_loss[loss=0.3045, simple_loss=0.362, pruned_loss=0.1235, over 3563261.53 frames. ], batch size: 78, lr: 3.91e-02, grad_scale: 4.0
2023-03-08 15:36:20,712 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0
2023-03-08 15:36:23,079 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6104.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:36:30,701 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6111.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:36:37,238 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.501e+02 5.639e+02 7.028e+02 8.696e+02 2.308e+03, threshold=1.406e+03, percent-clipped=2.0
2023-03-08 15:36:58,329 INFO [train.py:898] (3/4) Epoch 2, batch 2500, loss[loss=0.3351, simple_loss=0.3927, pruned_loss=0.1388, over 18473.00 frames. ], tot_loss[loss=0.3038, simple_loss=0.3618, pruned_loss=0.1229, over 3565849.12 frames. ], batch size: 59, lr: 3.90e-02, grad_scale: 4.0
2023-03-08 15:36:58,667 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6135.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:37:08,386 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8322, 5.1609, 3.4477, 4.6348, 4.7456, 5.0263, 4.9000, 2.6305], device='cuda:3'), covar=tensor([0.0182, 0.0077, 0.0384, 0.0066, 0.0113, 0.0093, 0.0169, 0.1117], device='cuda:3'), in_proj_covar=tensor([0.0043, 0.0026, 0.0048, 0.0029, 0.0034, 0.0028, 0.0035, 0.0061], device='cuda:3'), out_proj_covar=tensor([1.3170e-04, 8.1133e-05, 1.4354e-04, 1.0248e-04, 9.4576e-05, 8.5378e-05, 1.0279e-04, 1.6034e-04], device='cuda:3')
2023-03-08 15:37:39,830 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6170.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:37:56,479 INFO [train.py:898] (3/4) Epoch 2, batch 2550, loss[loss=0.271, simple_loss=0.3251, pruned_loss=0.1085, over 16719.00 frames. ], tot_loss[loss=0.3044, simple_loss=0.3622, pruned_loss=0.1233, over 3561957.99 frames. ], batch size: 37, lr: 3.89e-02, grad_scale: 4.0
2023-03-08 15:38:10,594 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6196.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:38:18,877 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6203.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:38:29,044 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6211.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:38:32,250 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6214.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:38:35,435 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.309e+02 5.777e+02 7.159e+02 8.638e+02 1.810e+03, threshold=1.432e+03, percent-clipped=8.0
2023-03-08 15:38:51,790 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6231.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:38:55,992 INFO [train.py:898] (3/4) Epoch 2, batch 2600, loss[loss=0.3133, simple_loss=0.3761, pruned_loss=0.1253, over 18149.00 frames. ], tot_loss[loss=0.3014, simple_loss=0.3601, pruned_loss=0.1213, over 3585582.09 frames. ], batch size: 62, lr: 3.88e-02, grad_scale: 4.0
2023-03-08 15:39:31,873 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6264.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:39:34,031 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6266.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:39:40,827 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6272.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:39:53,751 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.85 vs. limit=2.0
2023-03-08 15:39:55,317 INFO [train.py:898] (3/4) Epoch 2, batch 2650, loss[loss=0.2672, simple_loss=0.3316, pruned_loss=0.1014, over 18357.00 frames. ], tot_loss[loss=0.3011, simple_loss=0.3597, pruned_loss=0.1213, over 3580485.22 frames. ], batch size: 46, lr: 3.87e-02, grad_scale: 4.0
2023-03-08 15:40:20,598 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6306.0, num_to_drop=1, layers_to_drop={2}
2023-03-08 15:40:33,294 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.735e+02 5.474e+02 6.730e+02 8.945e+02 2.130e+03, threshold=1.346e+03, percent-clipped=7.0
2023-03-08 15:40:45,860 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6327.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:40:54,509 INFO [train.py:898] (3/4) Epoch 2, batch 2700, loss[loss=0.2923, simple_loss=0.3586, pruned_loss=0.113, over 18354.00 frames. ], tot_loss[loss=0.301, simple_loss=0.3596, pruned_loss=0.1212, over 3588155.47 frames. ], batch size: 55, lr: 3.86e-02, grad_scale: 4.0
2023-03-08 15:41:07,702 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6346.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:41:17,135 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6354.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 15:41:53,013 INFO [train.py:898] (3/4) Epoch 2, batch 2750, loss[loss=0.3369, simple_loss=0.3956, pruned_loss=0.1391, over 17757.00 frames. ], tot_loss[loss=0.3029, simple_loss=0.3609, pruned_loss=0.1224, over 3577973.72 frames. ], batch size: 70, lr: 3.85e-02, grad_scale: 4.0
2023-03-08 15:42:10,003 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6399.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:42:19,654 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9628, 4.8284, 5.0162, 5.1235, 5.0358, 5.6176, 5.2189, 5.0661], device='cuda:3'), covar=tensor([0.0663, 0.0582, 0.0629, 0.0507, 0.0993, 0.0570, 0.0573, 0.1197], device='cuda:3'), in_proj_covar=tensor([0.0177, 0.0145, 0.0143, 0.0135, 0.0183, 0.0194, 0.0134, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 15:42:24,370 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6411.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:42:31,296 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.682e+02 5.887e+02 7.195e+02 9.085e+02 1.925e+03, threshold=1.439e+03, percent-clipped=3.0
2023-03-08 15:42:52,442 INFO [train.py:898] (3/4) Epoch 2, batch 2800, loss[loss=0.2704, simple_loss=0.329, pruned_loss=0.1059, over 18253.00 frames. ], tot_loss[loss=0.301, simple_loss=0.3591, pruned_loss=0.1215, over 3574459.69 frames. ], batch size: 47, lr: 3.84e-02, grad_scale: 8.0
2023-03-08 15:43:21,452 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6459.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:43:25,099 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6462.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:43:31,000 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6467.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:43:51,661 INFO [train.py:898] (3/4) Epoch 2, batch 2850, loss[loss=0.2977, simple_loss=0.3632, pruned_loss=0.1161, over 18325.00 frames. ], tot_loss[loss=0.3003, simple_loss=0.3589, pruned_loss=0.1209, over 3581301.73 frames. ], batch size: 54, lr: 3.83e-02, grad_scale: 8.0
2023-03-08 15:43:58,845 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6491.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:44:26,181 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6514.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:44:29,244 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.462e+02 5.681e+02 6.833e+02 8.085e+02 2.080e+03, threshold=1.367e+03, percent-clipped=2.0
2023-03-08 15:44:37,111 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6523.0, num_to_drop=1, layers_to_drop={2}
2023-03-08 15:44:40,173 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6526.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:44:43,140 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6528.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:44:50,784 INFO [train.py:898] (3/4) Epoch 2, batch 2900, loss[loss=0.2866, simple_loss=0.3491, pruned_loss=0.1121, over 18497.00 frames. ], tot_loss[loss=0.2996, simple_loss=0.3582, pruned_loss=0.1205, over 3581175.30 frames. ], batch size: 53, lr: 3.82e-02, grad_scale: 8.0
2023-03-08 15:45:19,416 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6559.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:45:22,841 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6562.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:45:28,571 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6567.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:45:49,894 INFO [train.py:898] (3/4) Epoch 2, batch 2950, loss[loss=0.25, simple_loss=0.3221, pruned_loss=0.08889, over 18263.00 frames. ], tot_loss[loss=0.2973, simple_loss=0.3569, pruned_loss=0.1189, over 3589868.76 frames. ], batch size: 49, lr: 3.81e-02, grad_scale: 8.0
2023-03-08 15:46:27,984 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.426e+02 5.603e+02 6.658e+02 8.574e+02 1.720e+03, threshold=1.332e+03, percent-clipped=3.0
2023-03-08 15:46:33,939 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6622.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:46:42,537 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6629.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:46:49,586 INFO [train.py:898] (3/4) Epoch 2, batch 3000, loss[loss=0.2817, simple_loss=0.3568, pruned_loss=0.1033, over 18562.00 frames. ], tot_loss[loss=0.2971, simple_loss=0.3572, pruned_loss=0.1185, over 3596330.86 frames. ], batch size: 54, lr: 3.80e-02, grad_scale: 8.0
2023-03-08 15:46:49,586 INFO [train.py:923] (3/4) Computing validation loss
2023-03-08 15:47:01,155 INFO [train.py:932] (3/4) Epoch 2, validation: loss=0.2202, simple_loss=0.3188, pruned_loss=0.06074, over 944034.00 frames.
2023-03-08 15:47:01,156 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19451MB
2023-03-08 15:47:05,158 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.75 vs. limit=5.0
2023-03-08 15:47:15,040 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6646.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:47:15,459 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.42 vs. limit=5.0
2023-03-08 15:47:59,750 INFO [train.py:898] (3/4) Epoch 2, batch 3050, loss[loss=0.2884, simple_loss=0.3495, pruned_loss=0.1136, over 18488.00 frames. ], tot_loss[loss=0.2978, simple_loss=0.3574, pruned_loss=0.1191, over 3588531.11 frames. ], batch size: 47, lr: 3.79e-02, grad_scale: 8.0
2023-03-08 15:48:00,084 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6751, 4.3363, 4.7612, 4.5570, 4.4989, 4.3468, 4.9488, 4.9286], device='cuda:3'), covar=tensor([0.0091, 0.0189, 0.0124, 0.0124, 0.0133, 0.0152, 0.0132, 0.0138], device='cuda:3'), in_proj_covar=tensor([0.0060, 0.0048, 0.0046, 0.0054, 0.0051, 0.0060, 0.0051, 0.0044], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0001, 0.0001, 0.0001, 0.0001, 0.0002, 0.0001, 0.0001], device='cuda:3')
2023-03-08 15:48:05,942 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6690.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:48:10,431 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6694.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:48:17,432 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6699.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:48:37,955 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.591e+02 5.592e+02 6.340e+02 8.473e+02 1.907e+03, threshold=1.268e+03, percent-clipped=6.0
2023-03-08 15:48:49,484 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.92 vs. limit=2.0
2023-03-08 15:48:58,812 INFO [train.py:898] (3/4) Epoch 2, batch 3100, loss[loss=0.4064, simple_loss=0.433, pruned_loss=0.1899, over 16404.00 frames. ], tot_loss[loss=0.2967, simple_loss=0.3566, pruned_loss=0.1184, over 3583072.63 frames. ], batch size: 94, lr: 3.79e-02, grad_scale: 8.0
2023-03-08 15:49:04,928 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2661, 2.8094, 3.9353, 2.4111, 3.2872, 3.5720, 3.7888, 3.7574], device='cuda:3'), covar=tensor([0.0307, 0.0433, 0.0107, 0.0648, 0.1223, 0.0046, 0.0170, 0.0199], device='cuda:3'), in_proj_covar=tensor([0.0078, 0.0085, 0.0052, 0.0087, 0.0158, 0.0060, 0.0071, 0.0077], device='cuda:3'), out_proj_covar=tensor([5.9425e-05, 6.3329e-05, 4.0693e-05, 6.6206e-05, 1.1833e-04, 3.9351e-05, 5.1658e-05, 5.8467e-05], device='cuda:3')
2023-03-08 15:49:12,564 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6747.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:49:15,351 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.98 vs. limit=2.0
2023-03-08 15:49:57,573 INFO [train.py:898] (3/4) Epoch 2, batch 3150, loss[loss=0.2959, simple_loss=0.3606, pruned_loss=0.1156, over 17747.00 frames. ], tot_loss[loss=0.2972, simple_loss=0.3567, pruned_loss=0.1189, over 3580310.60 frames. ], batch size: 70, lr: 3.78e-02, grad_scale: 8.0
2023-03-08 15:50:04,928 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6791.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:50:36,055 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.441e+02 5.682e+02 6.955e+02 9.535e+02 1.991e+03, threshold=1.391e+03, percent-clipped=9.0
2023-03-08 15:50:37,910 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6818.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 15:50:43,422 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6823.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:50:46,902 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6826.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:50:57,112 INFO [train.py:898] (3/4) Epoch 2, batch 3200, loss[loss=0.3421, simple_loss=0.383, pruned_loss=0.1506, over 16170.00 frames. ], tot_loss[loss=0.2959, simple_loss=0.3557, pruned_loss=0.1181, over 3574011.75 frames. ], batch size: 94, lr: 3.77e-02, grad_scale: 8.0
2023-03-08 15:51:02,040 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6839.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:51:26,110 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6859.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:51:35,092 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6867.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:51:43,490 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6874.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:51:48,480 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6878.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 15:51:56,397 INFO [train.py:898] (3/4) Epoch 2, batch 3250, loss[loss=0.2645, simple_loss=0.3261, pruned_loss=0.1014, over 18511.00 frames. ], tot_loss[loss=0.2949, simple_loss=0.3548, pruned_loss=0.1175, over 3579331.58 frames. ], batch size: 47, lr: 3.76e-02, grad_scale: 8.0
2023-03-08 15:52:13,146 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.01 vs. limit=2.0
2023-03-08 15:52:22,339 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6907.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:52:25,371 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6909.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:52:31,964 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6915.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:52:34,070 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.634e+02 5.392e+02 6.712e+02 8.838e+02 2.461e+03, threshold=1.342e+03, percent-clipped=3.0
2023-03-08 15:52:40,451 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6922.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:52:54,799 INFO [train.py:898] (3/4) Epoch 2, batch 3300, loss[loss=0.3003, simple_loss=0.3606, pruned_loss=0.12, over 18299.00 frames. ], tot_loss[loss=0.2959, simple_loss=0.3557, pruned_loss=0.118, over 3576436.43 frames. ], batch size: 49, lr: 3.75e-02, grad_scale: 8.0
2023-03-08 15:53:00,142 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6939.0, num_to_drop=1, layers_to_drop={3}
2023-03-08 15:53:02,311 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0459, 4.9996, 2.1397, 4.7615, 4.9431, 2.2284, 4.2612, 3.5997], device='cuda:3'), covar=tensor([0.0053, 0.0195, 0.1760, 0.0202, 0.0073, 0.1474, 0.0436, 0.0708], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0066, 0.0146, 0.0096, 0.0057, 0.0131, 0.0129, 0.0120], device='cuda:3'), out_proj_covar=tensor([6.9438e-05, 8.7664e-05, 1.4622e-04, 1.0170e-04, 6.1453e-05, 1.3596e-04, 1.3788e-04, 1.3541e-04], device='cuda:3')
2023-03-08 15:53:35,749 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.0817, 4.4019, 4.7187, 4.1634, 3.2084, 2.4450, 4.5662, 4.9514], device='cuda:3'), covar=tensor([0.1019, 0.0273, 0.0048, 0.0163, 0.0740, 0.1076, 0.0113, 0.0027], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0044, 0.0040, 0.0068, 0.0104, 0.0111, 0.0070, 0.0035], device='cuda:3'), out_proj_covar=tensor([1.4015e-04, 8.0552e-05, 5.9948e-05, 1.0096e-04, 1.4545e-04, 1.5924e-04, 1.0421e-04, 5.2826e-05], device='cuda:3')
2023-03-08 15:53:36,602 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=6970.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:53:36,849 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6970.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:53:54,223 INFO [train.py:898] (3/4) Epoch 2, batch 3350, loss[loss=0.3091, simple_loss=0.3653, pruned_loss=0.1265, over 18487.00 frames. ], tot_loss[loss=0.2948, simple_loss=0.3554, pruned_loss=0.1171, over 3584393.94 frames. ], batch size: 51, lr: 3.74e-02, grad_scale: 8.0
2023-03-08 15:53:54,407 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6985.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:54:22,126 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7381, 4.4712, 4.8777, 4.6341, 4.5306, 4.4879, 4.9011, 4.9656], device='cuda:3'), covar=tensor([0.0076, 0.0124, 0.0101, 0.0091, 0.0091, 0.0114, 0.0103, 0.0106], device='cuda:3'), in_proj_covar=tensor([0.0060, 0.0048, 0.0045, 0.0055, 0.0051, 0.0060, 0.0053, 0.0045], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0001, 0.0001, 0.0001, 0.0001, 0.0002, 0.0001, 0.0001], device='cuda:3')
2023-03-08 15:54:32,863 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.449e+02 5.349e+02 6.526e+02 8.040e+02 1.400e+03, threshold=1.305e+03, percent-clipped=1.0
2023-03-08 15:54:33,336 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1776, 4.1533, 4.0732, 3.9973, 2.3312, 3.9745, 3.3631, 2.4202], device='cuda:3'), covar=tensor([0.0118, 0.0096, 0.0122, 0.0124, 0.1205, 0.0090, 0.0147, 0.0865], device='cuda:3'), in_proj_covar=tensor([0.0068, 0.0062, 0.0057, 0.0054, 0.0113, 0.0055, 0.0052, 0.0106], device='cuda:3'), out_proj_covar=tensor([6.6481e-05, 5.9751e-05, 5.7452e-05, 5.5341e-05, 1.0917e-04, 5.3015e-05, 5.9096e-05, 1.0528e-04], device='cuda:3')
2023-03-08 15:54:39,949 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1567, 4.8400, 4.9914, 4.7381, 4.5648, 4.8602, 4.1249, 4.7056], device='cuda:3'), covar=tensor([0.0415, 0.0527, 0.0306, 0.0264, 0.0570, 0.0324, 0.1118, 0.0368], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0115, 0.0102, 0.0088, 0.0111, 0.0111, 0.0147, 0.0102], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-08 15:54:53,424 INFO [train.py:898] (3/4) Epoch 2, batch 3400, loss[loss=0.3052, simple_loss=0.3517, pruned_loss=0.1294, over 18500.00 frames. ], tot_loss[loss=0.2958, simple_loss=0.3564, pruned_loss=0.1176, over 3584320.46 frames. ], batch size: 44, lr: 3.73e-02, grad_scale: 8.0
2023-03-08 15:54:57,836 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0
2023-03-08 15:55:04,151 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7116, 3.0563, 1.5538, 4.1364, 2.8129, 4.2726, 2.1849, 3.8197], device='cuda:3'), covar=tensor([0.0387, 0.1117, 0.1976, 0.0230, 0.0999, 0.0074, 0.1373, 0.0294], device='cuda:3'), in_proj_covar=tensor([0.0083, 0.0122, 0.0123, 0.0073, 0.0114, 0.0058, 0.0118, 0.0106], device='cuda:3'), out_proj_covar=tensor([1.0747e-04, 1.4099e-04, 1.3937e-04, 1.0915e-04, 1.3604e-04, 6.9233e-05, 1.3253e-04, 1.1973e-04], device='cuda:3')
2023-03-08 15:55:09,254 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7048.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 15:55:46,505 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-08 15:55:52,764 INFO [train.py:898] (3/4) Epoch 2, batch 3450, loss[loss=0.2881, simple_loss=0.3592, pruned_loss=0.1085, over 18491.00 frames. ], tot_loss[loss=0.2955, simple_loss=0.3563, pruned_loss=0.1173, over 3581944.92 frames. ], batch size: 53, lr: 3.72e-02, grad_scale: 8.0
2023-03-08 15:56:16,497 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([1.6386, 4.3346, 4.4593, 3.9548, 3.1782, 2.2603, 3.9795, 4.6037], device='cuda:3'), covar=tensor([0.1217, 0.0170, 0.0052, 0.0174, 0.0745, 0.1157, 0.0261, 0.0035], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0043, 0.0039, 0.0070, 0.0104, 0.0110, 0.0071, 0.0035], device='cuda:3'), out_proj_covar=tensor([1.4118e-04, 8.0019e-05, 6.0315e-05, 1.0358e-04, 1.4686e-04, 1.5886e-04, 1.0537e-04, 5.3586e-05], device='cuda:3')
2023-03-08 15:56:20,898 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7109.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 15:56:30,820 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.505e+02 5.757e+02 6.768e+02 8.417e+02 2.561e+03, threshold=1.354e+03, percent-clipped=6.0
2023-03-08 15:56:32,157 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7118.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 15:56:37,422 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7123.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:56:51,288 INFO [train.py:898] (3/4) Epoch 2, batch 3500, loss[loss=0.2608, simple_loss=0.3325, pruned_loss=0.09454, over 18399.00 frames. ], tot_loss[loss=0.2952, simple_loss=0.356, pruned_loss=0.1173, over 3578344.23 frames. ], batch size: 52, lr: 3.71e-02, grad_scale: 8.0
2023-03-08 15:57:23,506 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7163.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:57:26,595 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=7166.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:57:31,687 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=7171.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:57:46,604 INFO [train.py:898] (3/4) Epoch 2, batch 3550, loss[loss=0.3086, simple_loss=0.3717, pruned_loss=0.1228, over 17991.00 frames. ], tot_loss[loss=0.2936, simple_loss=0.3544, pruned_loss=0.1164, over 3584894.70 frames. ], batch size: 65, lr: 3.71e-02, grad_scale: 8.0
2023-03-08 15:57:56,630 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.73 vs. limit=2.0
2023-03-08 15:58:20,679 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0
2023-03-08 15:58:22,322 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.147e+02 5.640e+02 6.762e+02 8.858e+02 1.958e+03, threshold=1.352e+03, percent-clipped=2.0
2023-03-08 15:58:30,247 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7224.0, num_to_drop=1, layers_to_drop={3}
2023-03-08 15:58:40,532 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7234.0, num_to_drop=1, layers_to_drop={3}
2023-03-08 15:58:41,454 INFO [train.py:898] (3/4) Epoch 2, batch 3600, loss[loss=0.2954, simple_loss=0.3465, pruned_loss=0.1221, over 18150.00 frames. ], tot_loss[loss=0.2935, simple_loss=0.354, pruned_loss=0.1165, over 3580600.98 frames. ], batch size: 44, lr: 3.70e-02, grad_scale: 8.0
2023-03-08 15:59:12,939 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7265.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 15:59:14,314 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0
2023-03-08 15:59:46,465 INFO [train.py:898] (3/4) Epoch 3, batch 0, loss[loss=0.3138, simple_loss=0.3711, pruned_loss=0.1282, over 17157.00 frames. ], tot_loss[loss=0.3138, simple_loss=0.3711, pruned_loss=0.1282, over 17157.00 frames. ], batch size: 78, lr: 3.51e-02, grad_scale: 8.0
2023-03-08 15:59:46,466 INFO [train.py:923] (3/4) Computing validation loss
2023-03-08 15:59:58,149 INFO [train.py:932] (3/4) Epoch 3, validation: loss=0.2228, simple_loss=0.3215, pruned_loss=0.06204, over 944034.00 frames.
2023-03-08 15:59:58,150 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19451MB
2023-03-08 16:00:05,133 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7275.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:00:08,781 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9868, 2.7067, 2.3857, 2.6650, 2.6528, 2.3377, 2.1052, 3.0821], device='cuda:3'), covar=tensor([0.0182, 0.0179, 0.0562, 0.0197, 0.0317, 0.0380, 0.0473, 0.0250], device='cuda:3'), in_proj_covar=tensor([0.0036, 0.0037, 0.0041, 0.0044, 0.0036, 0.0054, 0.0063, 0.0036], device='cuda:3'), out_proj_covar=tensor([4.9089e-05, 5.1390e-05, 6.3332e-05, 5.9273e-05, 5.1091e-05, 7.8556e-05, 9.2781e-05, 5.7193e-05], device='cuda:3')
2023-03-08 16:00:16,783 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7285.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:00:35,606 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1276, 2.7455, 2.4802, 2.7096, 2.7460, 2.4019, 2.1304, 3.2470], device='cuda:3'), covar=tensor([0.0110, 0.0161, 0.0410, 0.0172, 0.0221, 0.0276, 0.0360, 0.0230], device='cuda:3'), in_proj_covar=tensor([0.0035, 0.0036, 0.0040, 0.0043, 0.0035, 0.0053, 0.0061, 0.0035], device='cuda:3'), out_proj_covar=tensor([4.7490e-05, 5.0252e-05, 6.1384e-05, 5.8067e-05, 5.0042e-05, 7.7572e-05, 8.9600e-05, 5.5482e-05], device='cuda:3')
2023-03-08 16:00:41,380 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2387, 5.8248, 5.3859, 5.5132, 5.1677, 5.5403, 5.9569, 5.7370], device='cuda:3'), covar=tensor([0.1160, 0.0463, 0.0349, 0.0602, 0.1725, 0.0571, 0.0430, 0.0524], device='cuda:3'), in_proj_covar=tensor([0.0269, 0.0229, 0.0177, 0.0234, 0.0335, 0.0242, 0.0231, 0.0213], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-08 16:00:54,864 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.526e+02 5.312e+02 6.673e+02 8.366e+02 1.427e+03, threshold=1.335e+03, percent-clipped=1.0
2023-03-08 16:00:57,231 INFO [train.py:898] (3/4) Epoch 3, batch 50, loss[loss=0.3114, simple_loss=0.3795, pruned_loss=0.1217, over 17932.00 frames. ], tot_loss[loss=0.2889, simple_loss=0.351, pruned_loss=0.1135, over 807124.41 frames. ], batch size: 65, lr: 3.50e-02, grad_scale: 8.0
2023-03-08 16:01:14,311 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=7333.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:01:18,050 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7336.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:01:49,479 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7653, 2.5856, 2.4453, 2.3774, 2.5030, 2.0441, 1.7791, 2.9272], device='cuda:3'), covar=tensor([0.0195, 0.0238, 0.0379, 0.0249, 0.0350, 0.0379, 0.0509, 0.0372], device='cuda:3'), in_proj_covar=tensor([0.0036, 0.0037, 0.0041, 0.0045, 0.0035, 0.0056, 0.0063, 0.0037], device='cuda:3'), out_proj_covar=tensor([4.8990e-05, 5.2025e-05, 6.3390e-05, 6.0732e-05, 5.0636e-05, 8.1396e-05, 9.3730e-05, 5.8339e-05], device='cuda:3')
2023-03-08 16:01:55,591 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0
2023-03-08 16:01:57,066 INFO [train.py:898] (3/4) Epoch 3, batch 100, loss[loss=0.2291, simple_loss=0.2942, pruned_loss=0.08202, over 18254.00 frames. ], tot_loss[loss=0.2899, simple_loss=0.3509, pruned_loss=0.1145, over 1399162.19 frames. ], batch size: 45, lr: 3.49e-02, grad_scale: 8.0
2023-03-08 16:02:39,744 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7404.0, num_to_drop=1, layers_to_drop={3}
2023-03-08 16:02:43,176 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7407.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:02:54,249 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.382e+02 5.439e+02 6.621e+02 7.725e+02 1.513e+03, threshold=1.324e+03, percent-clipped=3.0
2023-03-08 16:02:56,568 INFO [train.py:898] (3/4) Epoch 3, batch 150, loss[loss=0.2813, simple_loss=0.3538, pruned_loss=0.1044, over 18491.00 frames. ], tot_loss[loss=0.2857, simple_loss=0.3483, pruned_loss=0.1116, over 1902390.89 frames. ], batch size: 51, lr: 3.48e-02, grad_scale: 8.0
2023-03-08 16:03:10,766 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.91 vs. limit=2.0
2023-03-08 16:03:54,935 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7468.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:03:55,727 INFO [train.py:898] (3/4) Epoch 3, batch 200, loss[loss=0.2567, simple_loss=0.3237, pruned_loss=0.09487, over 18508.00 frames. ], tot_loss[loss=0.2844, simple_loss=0.3473, pruned_loss=0.1108, over 2281204.64 frames. ], batch size: 47, lr: 3.47e-02, grad_scale: 8.0
2023-03-08 16:04:01,023 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0
2023-03-08 16:04:08,115 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2913, 3.1481, 1.4040, 3.7738, 2.6073, 3.8875, 1.9087, 3.2344], device='cuda:3'), covar=tensor([0.0408, 0.0918, 0.1866, 0.0345, 0.0941, 0.0097, 0.1418, 0.0410], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0129, 0.0130, 0.0081, 0.0119, 0.0061, 0.0126, 0.0112], device='cuda:3'), out_proj_covar=tensor([1.2411e-04, 1.5176e-04, 1.4948e-04, 1.2409e-04, 1.4509e-04, 7.5160e-05, 1.4420e-04, 1.3172e-04], device='cuda:3')
2023-03-08 16:04:51,798 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.337e+02 5.369e+02 6.599e+02 8.256e+02 1.502e+03, threshold=1.320e+03, percent-clipped=3.0
2023-03-08 16:04:54,103 INFO [train.py:898] (3/4) Epoch 3, batch 250, loss[loss=0.2467, simple_loss=0.3195, pruned_loss=0.08699, over 18291.00 frames. ], tot_loss[loss=0.2865, simple_loss=0.3487, pruned_loss=0.1122, over 2570178.49 frames. ], batch size: 49, lr: 3.47e-02, grad_scale: 8.0
2023-03-08 16:04:54,427 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7519.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 16:05:10,756 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7534.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 16:05:47,514 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7565.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:05:51,576 INFO [train.py:898] (3/4) Epoch 3, batch 300, loss[loss=0.2897, simple_loss=0.3592, pruned_loss=0.1101, over 18394.00 frames. ], tot_loss[loss=0.2853, simple_loss=0.3473, pruned_loss=0.1116, over 2804118.74 frames. ], batch size: 52, lr: 3.46e-02, grad_scale: 8.0
2023-03-08 16:06:06,073 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=7582.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 16:06:43,936 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=7613.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:06:48,194 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.523e+02 5.233e+02 6.862e+02 8.770e+02 1.930e+03, threshold=1.372e+03, percent-clipped=4.0
2023-03-08 16:06:50,539 INFO [train.py:898] (3/4) Epoch 3, batch 350, loss[loss=0.298, simple_loss=0.3648, pruned_loss=0.1156, over 18497.00 frames. ], tot_loss[loss=0.2864, simple_loss=0.3483, pruned_loss=0.1123, over 2968123.05 frames. ], batch size: 51, lr: 3.45e-02, grad_scale: 8.0
2023-03-08 16:06:56,791 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7624.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:07:04,462 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7631.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:07:31,391 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3721, 4.2234, 2.4873, 4.1330, 4.4340, 2.4662, 3.9085, 3.4742], device='cuda:3'), covar=tensor([0.0075, 0.0405, 0.1354, 0.0217, 0.0060, 0.1296, 0.0403, 0.0701], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0081, 0.0152, 0.0109, 0.0062, 0.0145, 0.0140, 0.0137], device='cuda:3'), out_proj_covar=tensor([7.8210e-05, 1.1021e-04, 1.5836e-04, 1.1893e-04, 6.9588e-05, 1.5519e-04, 1.5447e-04, 1.5783e-04], device='cuda:3')
2023-03-08 16:07:45,727 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.65 vs. limit=2.0
2023-03-08 16:07:49,575 INFO [train.py:898] (3/4) Epoch 3, batch 400, loss[loss=0.2853, simple_loss=0.3497, pruned_loss=0.1105, over 18300.00 frames. ], tot_loss[loss=0.2848, simple_loss=0.3477, pruned_loss=0.111, over 3105766.41 frames. ], batch size: 54, lr: 3.44e-02, grad_scale: 8.0
2023-03-08 16:08:07,811 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7685.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:08:29,785 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7704.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 16:08:47,268 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.778e+02 4.774e+02 5.790e+02 7.165e+02 2.375e+03, threshold=1.158e+03, percent-clipped=2.0
2023-03-08 16:08:48,411 INFO [train.py:898] (3/4) Epoch 3, batch 450, loss[loss=0.274, simple_loss=0.3402, pruned_loss=0.1039, over 18282.00 frames. ], tot_loss[loss=0.2832, simple_loss=0.3465, pruned_loss=0.11, over 3210434.09 frames. ], batch size: 49, lr: 3.44e-02, grad_scale: 8.0
2023-03-08 16:09:00,656 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.38 vs. limit=5.0
2023-03-08 16:09:15,843 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7743.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:09:25,469 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([1.7948, 4.0195, 4.2347, 3.8453, 2.5895, 2.2287, 3.4985, 4.3762], device='cuda:3'), covar=tensor([0.1044, 0.0232, 0.0048, 0.0170, 0.0864, 0.1027, 0.0268, 0.0034], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0048, 0.0044, 0.0077, 0.0112, 0.0118, 0.0078, 0.0038], device='cuda:3'), out_proj_covar=tensor([1.5432e-04, 8.8859e-05, 7.0539e-05, 1.1822e-04, 1.6227e-04, 1.7524e-04, 1.1845e-04, 5.9790e-05], device='cuda:3')
2023-03-08 16:09:26,353 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=7752.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 16:09:38,575 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7763.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:09:46,280 INFO [train.py:898] (3/4) Epoch 3, batch 500, loss[loss=0.3032, simple_loss=0.3556, pruned_loss=0.1254, over 17045.00 frames. ], tot_loss[loss=0.2834, simple_loss=0.3466, pruned_loss=0.1101, over 3306611.82 frames. ], batch size: 78, lr: 3.43e-02, grad_scale: 8.0
2023-03-08 16:10:19,979 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.95 vs. limit=2.0
2023-03-08 16:10:26,143 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7802.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:10:28,500 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7804.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:10:45,273 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.544e+02 5.001e+02 6.338e+02 8.345e+02 2.616e+03, threshold=1.268e+03, percent-clipped=10.0
2023-03-08 16:10:46,500 INFO [train.py:898] (3/4) Epoch 3, batch 550, loss[loss=0.245, simple_loss=0.3108, pruned_loss=0.08957, over 18391.00 frames. ], tot_loss[loss=0.2835, simple_loss=0.3466, pruned_loss=0.1102, over 3362402.16 frames. ], batch size: 42, lr: 3.42e-02, grad_scale: 8.0
2023-03-08 16:10:46,773 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7819.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 16:11:10,751 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4555, 4.2660, 2.3719, 3.8909, 4.4925, 2.2194, 3.5670, 3.2623], device='cuda:3'), covar=tensor([0.0079, 0.0282, 0.1423, 0.0314, 0.0058, 0.1496, 0.0580, 0.0765], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0083, 0.0155, 0.0113, 0.0062, 0.0144, 0.0143, 0.0137], device='cuda:3'), out_proj_covar=tensor([7.8684e-05, 1.1319e-04, 1.6376e-04, 1.2525e-04, 7.0310e-05, 1.5630e-04, 1.5828e-04, 1.5887e-04], device='cuda:3')
2023-03-08 16:11:37,609 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7863.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:11:41,868 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=7867.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:11:44,719 INFO [train.py:898] (3/4) Epoch 3, batch 600, loss[loss=0.2568, simple_loss=0.3256, pruned_loss=0.094, over 18414.00 frames. ], tot_loss[loss=0.2832, simple_loss=0.3463, pruned_loss=0.11, over 3412796.49 frames. ], batch size: 48, lr: 3.41e-02, grad_scale: 8.0
2023-03-08 16:11:49,172 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([1.7211, 4.4408, 4.7772, 4.2337, 2.9076, 2.3734, 4.2975, 4.7485], device='cuda:3'), covar=tensor([0.1308, 0.0406, 0.0042, 0.0198, 0.0967, 0.1103, 0.0179, 0.0034], device='cuda:3'), in_proj_covar=tensor([0.0099, 0.0048, 0.0044, 0.0077, 0.0111, 0.0118, 0.0078, 0.0038], device='cuda:3'), out_proj_covar=tensor([1.5277e-04, 8.9571e-05, 6.9997e-05, 1.1928e-04, 1.6219e-04, 1.7532e-04, 1.1881e-04, 6.0530e-05], device='cuda:3')
2023-03-08 16:12:42,169 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.737e+02 5.313e+02 6.628e+02 8.433e+02 2.088e+03, threshold=1.326e+03, percent-clipped=7.0
2023-03-08 16:12:43,851 INFO [train.py:898] (3/4) Epoch 3, batch 650, loss[loss=0.2791, simple_loss=0.3468, pruned_loss=0.1057, over 18398.00 frames. ], tot_loss[loss=0.2848, simple_loss=0.3478, pruned_loss=0.111, over 3446125.12 frames. ], batch size: 52, lr: 3.40e-02, grad_scale: 8.0
2023-03-08 16:12:59,327 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7931.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:13:42,825 INFO [train.py:898] (3/4) Epoch 3, batch 700, loss[loss=0.235, simple_loss=0.2966, pruned_loss=0.08665, over 18146.00 frames. ], tot_loss[loss=0.2849, simple_loss=0.348, pruned_loss=0.1109, over 3472305.61 frames. ], batch size: 44, lr: 3.40e-02, grad_scale: 8.0
2023-03-08 16:13:56,305 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=7979.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:13:57,411 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7980.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:14:45,130 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.136e+02 5.602e+02 6.674e+02 7.907e+02 1.811e+03, threshold=1.335e+03, percent-clipped=1.0
2023-03-08 16:14:46,260 INFO [train.py:898] (3/4) Epoch 3, batch 750, loss[loss=0.3261, simple_loss=0.3871, pruned_loss=0.1325, over 18276.00 frames. ], tot_loss[loss=0.2831, simple_loss=0.3464, pruned_loss=0.1099, over 3495857.03 frames. ], batch size: 57, lr: 3.39e-02, grad_scale: 8.0
2023-03-08 16:14:50,140 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4047, 3.1413, 2.3757, 2.8004, 3.1707, 2.3462, 2.0986, 3.1884], device='cuda:3'), covar=tensor([0.0149, 0.0208, 0.0537, 0.0243, 0.0178, 0.0471, 0.0591, 0.0223], device='cuda:3'), in_proj_covar=tensor([0.0040, 0.0043, 0.0046, 0.0052, 0.0040, 0.0061, 0.0074, 0.0042], device='cuda:3'), out_proj_covar=tensor([5.6367e-05, 6.1762e-05, 7.3528e-05, 7.2321e-05, 5.7495e-05, 9.1477e-05, 1.1386e-04, 6.6579e-05], device='cuda:3')
2023-03-08 16:15:37,095 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.66 vs. limit=5.0
2023-03-08 16:15:38,392 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8063.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:15:45,002 INFO [train.py:898] (3/4) Epoch 3, batch 800, loss[loss=0.2703, simple_loss=0.3323, pruned_loss=0.1042, over 18498.00 frames. ], tot_loss[loss=0.2834, simple_loss=0.3464, pruned_loss=0.1102, over 3503449.14 frames. ], batch size: 47, lr: 3.38e-02, grad_scale: 8.0
2023-03-08 16:15:50,109 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5209, 3.0976, 3.4999, 3.5613, 2.2898, 3.7081, 3.6751, 2.3805], device='cuda:3'), covar=tensor([0.0153, 0.0242, 0.0099, 0.0116, 0.1120, 0.0088, 0.0129, 0.0870], device='cuda:3'), in_proj_covar=tensor([0.0077, 0.0079, 0.0061, 0.0062, 0.0126, 0.0065, 0.0063, 0.0124], device='cuda:3'), out_proj_covar=tensor([7.4264e-05, 7.7230e-05, 6.4560e-05, 6.2580e-05, 1.2214e-04, 6.3174e-05, 7.0929e-05, 1.2345e-04], device='cuda:3')
2023-03-08 16:16:21,869 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=8099.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:16:35,652 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=8111.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:16:40,306 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1749, 5.4448, 3.0860, 4.9644, 4.9171, 5.3440, 4.9643, 2.4786], device='cuda:3'), covar=tensor([0.0185, 0.0036, 0.0609, 0.0071, 0.0067, 0.0041, 0.0133, 0.1200], device='cuda:3'), in_proj_covar=tensor([0.0052, 0.0035, 0.0065, 0.0039, 0.0041, 0.0036, 0.0044, 0.0078], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0003], device='cuda:3')
2023-03-08 16:16:43,312 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.074e+02 5.395e+02 6.687e+02 8.203e+02 1.547e+03, threshold=1.337e+03, percent-clipped=3.0
2023-03-08 16:16:44,214 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9666, 5.2662, 3.0195, 4.7343, 4.5912, 5.1301, 4.7770, 2.5997], device='cuda:3'), covar=tensor([0.0245, 0.0043, 0.0716, 0.0096, 0.0103, 0.0062, 0.0142, 0.1278], device='cuda:3'), in_proj_covar=tensor([0.0052, 0.0035, 0.0065, 0.0039, 0.0041, 0.0036, 0.0044, 0.0078], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0003], device='cuda:3')
2023-03-08 16:16:44,973 INFO [train.py:898] (3/4) Epoch 3, batch 850, loss[loss=0.2829, simple_loss=0.3559, pruned_loss=0.1049, over 18484.00 frames. ], tot_loss[loss=0.283, simple_loss=0.3464, pruned_loss=0.1099, over 3523925.22 frames. ], batch size: 53, lr: 3.37e-02, grad_scale: 8.0
2023-03-08 16:17:31,429 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=8158.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:17:35,477 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.62 vs. limit=5.0
2023-03-08 16:17:43,854 INFO [train.py:898] (3/4) Epoch 3, batch 900, loss[loss=0.2288, simple_loss=0.295, pruned_loss=0.08128, over 18436.00 frames. ], tot_loss[loss=0.2819, simple_loss=0.3456, pruned_loss=0.1091, over 3542382.91 frames. ], batch size: 43, lr: 3.37e-02, grad_scale: 8.0
2023-03-08 16:18:42,377 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.447e+02 5.230e+02 6.588e+02 8.350e+02 1.625e+03, threshold=1.318e+03, percent-clipped=3.0
2023-03-08 16:18:43,457 INFO [train.py:898] (3/4) Epoch 3, batch 950, loss[loss=0.2852, simple_loss=0.3539, pruned_loss=0.1082, over 18291.00 frames. ], tot_loss[loss=0.2809, simple_loss=0.3451, pruned_loss=0.1084, over 3565454.83 frames. ], batch size: 54, lr: 3.36e-02, grad_scale: 8.0
2023-03-08 16:18:56,772 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.12 vs. limit=2.0
2023-03-08 16:19:25,590 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6413, 5.4329, 5.4162, 4.9516, 5.0334, 5.2718, 4.7271, 5.2266], device='cuda:3'), covar=tensor([0.0260, 0.0237, 0.0177, 0.0256, 0.0411, 0.0219, 0.0815, 0.0200], device='cuda:3'), in_proj_covar=tensor([0.0098, 0.0128, 0.0108, 0.0099, 0.0123, 0.0124, 0.0167, 0.0111], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003], device='cuda:3')
2023-03-08 16:19:38,220 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8265.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:19:42,226 INFO [train.py:898] (3/4) Epoch 3, batch 1000, loss[loss=0.272, simple_loss=0.3379, pruned_loss=0.1031, over 18355.00 frames. ], tot_loss[loss=0.2796, simple_loss=0.344, pruned_loss=0.1076, over 3574523.71 frames. ], batch size: 55, lr: 3.35e-02, grad_scale: 8.0
2023-03-08 16:19:55,536 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8280.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:20:40,392 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.475e+02 5.034e+02 6.200e+02 8.356e+02 1.633e+03, threshold=1.240e+03, percent-clipped=2.0
2023-03-08 16:20:41,586 INFO [train.py:898] (3/4) Epoch 3, batch 1050, loss[loss=0.3044, simple_loss=0.3694, pruned_loss=0.1197, over 18089.00 frames. ], tot_loss[loss=0.2782, simple_loss=0.3428, pruned_loss=0.1068, over 3592064.79 frames. ], batch size: 62, lr: 3.34e-02, grad_scale: 8.0
2023-03-08 16:20:49,605 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=8326.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:20:51,658 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=8328.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:21:40,129 INFO [train.py:898] (3/4) Epoch 3, batch 1100, loss[loss=0.2397, simple_loss=0.315, pruned_loss=0.08221, over 18269.00 frames. ], tot_loss[loss=0.2793, simple_loss=0.3433, pruned_loss=0.1076, over 3581602.01 frames. ], batch size: 47, lr: 3.34e-02, grad_scale: 8.0
2023-03-08 16:22:16,107 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8399.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:22:38,117 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.304e+02 5.764e+02 6.854e+02 8.468e+02 1.709e+03, threshold=1.371e+03, percent-clipped=7.0
2023-03-08 16:22:39,142 INFO [train.py:898] (3/4) Epoch 3, batch 1150, loss[loss=0.2929, simple_loss=0.363, pruned_loss=0.1114, over 18483.00 frames. ], tot_loss[loss=0.2772, simple_loss=0.3415, pruned_loss=0.1064, over 3581463.62 frames. ], batch size: 53, lr: 3.33e-02, grad_scale: 8.0
2023-03-08 16:23:12,021 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=8447.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:23:26,403 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8458.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 16:23:38,494 INFO [train.py:898] (3/4) Epoch 3, batch 1200, loss[loss=0.2448, simple_loss=0.3019, pruned_loss=0.09385, over 18566.00 frames. ], tot_loss[loss=0.2782, simple_loss=0.3421, pruned_loss=0.1072, over 3582571.11 frames.
], batch size: 45, lr: 3.32e-02, grad_scale: 8.0 2023-03-08 16:23:48,470 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1949, 3.3026, 2.4515, 2.6469, 2.9272, 2.2499, 2.3411, 3.0952], device='cuda:3'), covar=tensor([0.0234, 0.0171, 0.0534, 0.0178, 0.0235, 0.0351, 0.0339, 0.0320], device='cuda:3'), in_proj_covar=tensor([0.0040, 0.0043, 0.0046, 0.0053, 0.0041, 0.0062, 0.0072, 0.0044], device='cuda:3'), out_proj_covar=tensor([5.8192e-05, 6.3027e-05, 7.3956e-05, 7.5996e-05, 6.0253e-05, 9.5174e-05, 1.1299e-04, 6.9822e-05], device='cuda:3') 2023-03-08 16:24:22,304 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=8506.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:24:36,207 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.117e+02 5.075e+02 6.913e+02 8.856e+02 3.555e+03, threshold=1.383e+03, percent-clipped=10.0 2023-03-08 16:24:37,338 INFO [train.py:898] (3/4) Epoch 3, batch 1250, loss[loss=0.3244, simple_loss=0.3808, pruned_loss=0.134, over 16049.00 frames. ], tot_loss[loss=0.279, simple_loss=0.3425, pruned_loss=0.1077, over 3579209.17 frames. ], batch size: 94, lr: 3.31e-02, grad_scale: 8.0 2023-03-08 16:25:04,147 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0 2023-03-08 16:25:16,678 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3036, 4.8050, 5.2370, 5.2118, 5.0551, 5.9074, 5.2803, 5.3033], device='cuda:3'), covar=tensor([0.0724, 0.0654, 0.0603, 0.0574, 0.1333, 0.0579, 0.0566, 0.1297], device='cuda:3'), in_proj_covar=tensor([0.0195, 0.0146, 0.0150, 0.0144, 0.0202, 0.0205, 0.0144, 0.0206], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 16:25:36,292 INFO [train.py:898] (3/4) Epoch 3, batch 1300, loss[loss=0.2775, simple_loss=0.3482, pruned_loss=0.1034, over 17787.00 frames. ], tot_loss[loss=0.2791, simple_loss=0.3428, pruned_loss=0.1077, over 3579512.87 frames. ], batch size: 70, lr: 3.31e-02, grad_scale: 8.0 2023-03-08 16:25:51,957 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.78 vs. limit=2.0 2023-03-08 16:26:14,738 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.87 vs. limit=2.0 2023-03-08 16:26:23,846 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.92 vs. limit=2.0 2023-03-08 16:26:35,144 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.185e+02 5.027e+02 6.070e+02 7.698e+02 1.470e+03, threshold=1.214e+03, percent-clipped=1.0 2023-03-08 16:26:36,229 INFO [train.py:898] (3/4) Epoch 3, batch 1350, loss[loss=0.2648, simple_loss=0.333, pruned_loss=0.09836, over 18294.00 frames. ], tot_loss[loss=0.2799, simple_loss=0.3436, pruned_loss=0.1081, over 3569320.82 frames. ], batch size: 49, lr: 3.30e-02, grad_scale: 8.0 2023-03-08 16:26:38,817 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=8621.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:27:35,145 INFO [train.py:898] (3/4) Epoch 3, batch 1400, loss[loss=0.3041, simple_loss=0.3651, pruned_loss=0.1215, over 18305.00 frames. ], tot_loss[loss=0.2787, simple_loss=0.3424, pruned_loss=0.1075, over 3575289.42 frames. 
], batch size: 54, lr: 3.29e-02, grad_scale: 8.0 2023-03-08 16:27:56,027 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0858, 5.2577, 3.1939, 4.9076, 4.7274, 5.1204, 5.0821, 2.7313], device='cuda:3'), covar=tensor([0.0190, 0.0043, 0.0584, 0.0074, 0.0072, 0.0054, 0.0089, 0.1168], device='cuda:3'), in_proj_covar=tensor([0.0050, 0.0035, 0.0065, 0.0040, 0.0043, 0.0036, 0.0043, 0.0077], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003], device='cuda:3') 2023-03-08 16:28:32,216 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.853e+02 5.972e+02 7.488e+02 8.927e+02 1.917e+03, threshold=1.498e+03, percent-clipped=2.0 2023-03-08 16:28:33,367 INFO [train.py:898] (3/4) Epoch 3, batch 1450, loss[loss=0.2878, simple_loss=0.3558, pruned_loss=0.11, over 18489.00 frames. ], tot_loss[loss=0.2771, simple_loss=0.3412, pruned_loss=0.1065, over 3590351.33 frames. ], batch size: 53, lr: 3.29e-02, grad_scale: 8.0 2023-03-08 16:29:21,480 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8760.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:29:32,474 INFO [train.py:898] (3/4) Epoch 3, batch 1500, loss[loss=0.2638, simple_loss=0.3355, pruned_loss=0.09608, over 18367.00 frames. ], tot_loss[loss=0.276, simple_loss=0.3411, pruned_loss=0.1055, over 3601955.78 frames. ], batch size: 55, lr: 3.28e-02, grad_scale: 8.0 2023-03-08 16:29:36,927 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.2832, 4.5959, 5.2743, 4.2558, 3.3508, 2.4487, 4.8458, 5.2122], device='cuda:3'), covar=tensor([0.1023, 0.0267, 0.0032, 0.0191, 0.0700, 0.1090, 0.0166, 0.0023], device='cuda:3'), in_proj_covar=tensor([0.0107, 0.0059, 0.0047, 0.0086, 0.0124, 0.0132, 0.0089, 0.0041], device='cuda:3'), out_proj_covar=tensor([1.6888e-04, 1.0757e-04, 7.4862e-05, 1.3670e-04, 1.8336e-04, 1.9861e-04, 1.3986e-04, 6.6392e-05], device='cuda:3') 2023-03-08 16:30:04,087 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9307, 4.8153, 4.9220, 4.7572, 4.7742, 5.5229, 4.9733, 4.9566], device='cuda:3'), covar=tensor([0.0728, 0.0674, 0.0596, 0.0550, 0.1272, 0.0633, 0.0619, 0.1290], device='cuda:3'), in_proj_covar=tensor([0.0198, 0.0153, 0.0155, 0.0143, 0.0206, 0.0207, 0.0141, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 16:30:29,513 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.789e+02 5.215e+02 6.562e+02 8.622e+02 2.061e+03, threshold=1.312e+03, percent-clipped=5.0 2023-03-08 16:30:30,746 INFO [train.py:898] (3/4) Epoch 3, batch 1550, loss[loss=0.2309, simple_loss=0.2998, pruned_loss=0.081, over 17583.00 frames. ], tot_loss[loss=0.2768, simple_loss=0.3415, pruned_loss=0.106, over 3592915.00 frames. ], batch size: 39, lr: 3.27e-02, grad_scale: 8.0 2023-03-08 16:30:34,109 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=8821.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:30:58,876 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-08 16:31:28,701 INFO [train.py:898] (3/4) Epoch 3, batch 1600, loss[loss=0.2545, simple_loss=0.3234, pruned_loss=0.09276, over 18481.00 frames. ], tot_loss[loss=0.2784, simple_loss=0.3429, pruned_loss=0.107, over 3588945.90 frames. 
], batch size: 51, lr: 3.26e-02, grad_scale: 8.0 2023-03-08 16:31:47,972 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-08 16:32:26,430 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.256e+02 5.119e+02 6.262e+02 8.326e+02 2.097e+03, threshold=1.252e+03, percent-clipped=4.0 2023-03-08 16:32:27,485 INFO [train.py:898] (3/4) Epoch 3, batch 1650, loss[loss=0.2801, simple_loss=0.3433, pruned_loss=0.1084, over 17725.00 frames. ], tot_loss[loss=0.2765, simple_loss=0.3415, pruned_loss=0.1057, over 3601979.09 frames. ], batch size: 70, lr: 3.26e-02, grad_scale: 8.0 2023-03-08 16:32:30,226 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8921.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:32:38,228 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8927.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:32:56,598 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.08 vs. limit=5.0 2023-03-08 16:33:26,961 INFO [train.py:898] (3/4) Epoch 3, batch 1700, loss[loss=0.278, simple_loss=0.3451, pruned_loss=0.1054, over 18444.00 frames. ], tot_loss[loss=0.2764, simple_loss=0.3414, pruned_loss=0.1057, over 3593012.64 frames. ], batch size: 59, lr: 3.25e-02, grad_scale: 8.0 2023-03-08 16:33:27,141 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=8969.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:33:37,034 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2129, 5.3624, 2.6267, 4.9977, 5.1263, 5.3141, 5.2001, 2.8853], device='cuda:3'), covar=tensor([0.0158, 0.0039, 0.0715, 0.0064, 0.0048, 0.0058, 0.0083, 0.0914], device='cuda:3'), in_proj_covar=tensor([0.0052, 0.0037, 0.0067, 0.0042, 0.0044, 0.0038, 0.0046, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003], device='cuda:3') 2023-03-08 16:33:44,115 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3723, 5.4216, 2.9298, 5.0241, 5.1984, 5.4057, 5.3177, 2.9097], device='cuda:3'), covar=tensor([0.0146, 0.0037, 0.0620, 0.0074, 0.0048, 0.0053, 0.0073, 0.0920], device='cuda:3'), in_proj_covar=tensor([0.0052, 0.0037, 0.0066, 0.0041, 0.0044, 0.0038, 0.0046, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003], device='cuda:3') 2023-03-08 16:33:46,592 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8463, 3.9868, 4.8162, 3.0206, 4.0858, 3.4951, 3.7416, 2.0532], device='cuda:3'), covar=tensor([0.0542, 0.0290, 0.0041, 0.0391, 0.0313, 0.0906, 0.0437, 0.1179], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0096, 0.0056, 0.0088, 0.0102, 0.0150, 0.0073, 0.0123], device='cuda:3'), out_proj_covar=tensor([1.0173e-04, 9.6933e-05, 5.5542e-05, 8.8180e-05, 1.0645e-04, 1.4551e-04, 8.6821e-05, 1.1781e-04], device='cuda:3') 2023-03-08 16:33:48,849 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8986.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:33:51,277 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=8988.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:34:19,991 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9013.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:34:25,794 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.324e+02 5.263e+02 6.332e+02 7.938e+02 1.916e+03, 
threshold=1.266e+03, percent-clipped=4.0 2023-03-08 16:34:26,992 INFO [train.py:898] (3/4) Epoch 3, batch 1750, loss[loss=0.2291, simple_loss=0.2985, pruned_loss=0.07984, over 18244.00 frames. ], tot_loss[loss=0.274, simple_loss=0.3395, pruned_loss=0.1043, over 3605015.08 frames. ], batch size: 45, lr: 3.24e-02, grad_scale: 8.0 2023-03-08 16:35:00,481 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9047.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:35:25,207 INFO [train.py:898] (3/4) Epoch 3, batch 1800, loss[loss=0.2772, simple_loss=0.3437, pruned_loss=0.1054, over 16300.00 frames. ], tot_loss[loss=0.2746, simple_loss=0.3399, pruned_loss=0.1047, over 3602649.67 frames. ], batch size: 94, lr: 3.24e-02, grad_scale: 8.0 2023-03-08 16:35:31,948 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9074.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:36:20,787 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9116.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:36:22,837 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.642e+02 5.266e+02 6.349e+02 9.123e+02 1.669e+03, threshold=1.270e+03, percent-clipped=4.0 2023-03-08 16:36:24,042 INFO [train.py:898] (3/4) Epoch 3, batch 1850, loss[loss=0.2857, simple_loss=0.3529, pruned_loss=0.1092, over 17718.00 frames. ], tot_loss[loss=0.2744, simple_loss=0.3394, pruned_loss=0.1047, over 3594759.07 frames. ], batch size: 70, lr: 3.23e-02, grad_scale: 8.0 2023-03-08 16:37:22,898 INFO [train.py:898] (3/4) Epoch 3, batch 1900, loss[loss=0.3383, simple_loss=0.3793, pruned_loss=0.1487, over 18259.00 frames. ], tot_loss[loss=0.2761, simple_loss=0.3405, pruned_loss=0.1059, over 3582314.31 frames. ], batch size: 49, lr: 3.22e-02, grad_scale: 8.0 2023-03-08 16:37:38,381 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6870, 4.2591, 2.1203, 4.2577, 4.7089, 2.2490, 3.7635, 3.5150], device='cuda:3'), covar=tensor([0.0065, 0.0707, 0.1708, 0.0332, 0.0066, 0.1577, 0.0576, 0.0782], device='cuda:3'), in_proj_covar=tensor([0.0071, 0.0102, 0.0161, 0.0130, 0.0066, 0.0149, 0.0157, 0.0150], device='cuda:3'), out_proj_covar=tensor([8.7103e-05, 1.4620e-04, 1.8020e-04, 1.5266e-04, 8.1886e-05, 1.7274e-04, 1.8157e-04, 1.8278e-04], device='cuda:3') 2023-03-08 16:38:20,292 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.837e+02 5.068e+02 6.125e+02 7.856e+02 1.754e+03, threshold=1.225e+03, percent-clipped=6.0 2023-03-08 16:38:21,534 INFO [train.py:898] (3/4) Epoch 3, batch 1950, loss[loss=0.2341, simple_loss=0.3001, pruned_loss=0.08404, over 18431.00 frames. ], tot_loss[loss=0.2745, simple_loss=0.3391, pruned_loss=0.105, over 3576995.64 frames. ], batch size: 43, lr: 3.22e-02, grad_scale: 8.0 2023-03-08 16:39:21,347 INFO [train.py:898] (3/4) Epoch 3, batch 2000, loss[loss=0.2971, simple_loss=0.3654, pruned_loss=0.1144, over 18018.00 frames. ], tot_loss[loss=0.2758, simple_loss=0.3402, pruned_loss=0.1057, over 3573709.32 frames. 
], batch size: 65, lr: 3.21e-02, grad_scale: 8.0 2023-03-08 16:39:38,151 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9283.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:40:01,705 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([1.9340, 3.7841, 4.5327, 3.7926, 2.5912, 2.2792, 4.0366, 4.5615], device='cuda:3'), covar=tensor([0.1072, 0.0311, 0.0066, 0.0196, 0.0941, 0.1166, 0.0241, 0.0033], device='cuda:3'), in_proj_covar=tensor([0.0110, 0.0064, 0.0047, 0.0087, 0.0126, 0.0130, 0.0092, 0.0046], device='cuda:3'), out_proj_covar=tensor([1.7411e-04, 1.1562e-04, 7.6205e-05, 1.3962e-04, 1.8787e-04, 1.9843e-04, 1.4746e-04, 7.1933e-05], device='cuda:3') 2023-03-08 16:40:19,550 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.362e+02 5.058e+02 6.498e+02 8.656e+02 1.894e+03, threshold=1.300e+03, percent-clipped=4.0 2023-03-08 16:40:20,762 INFO [train.py:898] (3/4) Epoch 3, batch 2050, loss[loss=0.2916, simple_loss=0.3575, pruned_loss=0.1129, over 18034.00 frames. ], tot_loss[loss=0.2758, simple_loss=0.3403, pruned_loss=0.1057, over 3575322.39 frames. ], batch size: 65, lr: 3.20e-02, grad_scale: 8.0 2023-03-08 16:40:48,488 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9342.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:41:15,275 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0 2023-03-08 16:41:15,938 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8355, 4.7389, 4.9347, 4.6164, 4.5406, 4.6392, 5.1287, 5.1614], device='cuda:3'), covar=tensor([0.0072, 0.0101, 0.0097, 0.0088, 0.0111, 0.0116, 0.0095, 0.0097], device='cuda:3'), in_proj_covar=tensor([0.0058, 0.0046, 0.0043, 0.0053, 0.0050, 0.0061, 0.0053, 0.0048], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 16:41:20,249 INFO [train.py:898] (3/4) Epoch 3, batch 2100, loss[loss=0.2689, simple_loss=0.3459, pruned_loss=0.09593, over 18362.00 frames. ], tot_loss[loss=0.2739, simple_loss=0.3392, pruned_loss=0.1043, over 3588911.40 frames. ], batch size: 55, lr: 3.20e-02, grad_scale: 8.0 2023-03-08 16:41:20,513 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9369.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:41:43,516 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7215, 4.6205, 4.6959, 4.6160, 4.4966, 4.5139, 4.9863, 5.0113], device='cuda:3'), covar=tensor([0.0081, 0.0102, 0.0145, 0.0081, 0.0115, 0.0136, 0.0103, 0.0104], device='cuda:3'), in_proj_covar=tensor([0.0060, 0.0048, 0.0044, 0.0055, 0.0051, 0.0063, 0.0054, 0.0050], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 16:42:13,630 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9414.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:42:15,906 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9416.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:42:17,748 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.310e+02 4.905e+02 6.085e+02 7.510e+02 1.544e+03, threshold=1.217e+03, percent-clipped=2.0 2023-03-08 16:42:18,946 INFO [train.py:898] (3/4) Epoch 3, batch 2150, loss[loss=0.266, simple_loss=0.3266, pruned_loss=0.1027, over 18286.00 frames. 
], tot_loss[loss=0.2737, simple_loss=0.3392, pruned_loss=0.1041, over 3589551.55 frames. ], batch size: 49, lr: 3.19e-02, grad_scale: 8.0 2023-03-08 16:43:02,371 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.14 vs. limit=5.0 2023-03-08 16:43:13,045 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=9464.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:43:18,562 INFO [train.py:898] (3/4) Epoch 3, batch 2200, loss[loss=0.3013, simple_loss=0.3724, pruned_loss=0.1151, over 17997.00 frames. ], tot_loss[loss=0.2737, simple_loss=0.3393, pruned_loss=0.104, over 3570536.44 frames. ], batch size: 65, lr: 3.18e-02, grad_scale: 8.0 2023-03-08 16:43:25,676 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9475.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:43:25,753 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9634, 4.5366, 4.8265, 3.1751, 4.1939, 3.6026, 3.7786, 2.4283], device='cuda:3'), covar=tensor([0.0500, 0.0223, 0.0036, 0.0368, 0.0338, 0.0896, 0.0580, 0.1063], device='cuda:3'), in_proj_covar=tensor([0.0107, 0.0106, 0.0060, 0.0093, 0.0111, 0.0159, 0.0082, 0.0131], device='cuda:3'), out_proj_covar=tensor([1.0839e-04, 1.0838e-04, 6.0761e-05, 9.5650e-05, 1.1697e-04, 1.5461e-04, 9.7468e-05, 1.2648e-04], device='cuda:3') 2023-03-08 16:44:16,066 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.011e+02 5.021e+02 6.106e+02 7.632e+02 1.377e+03, threshold=1.221e+03, percent-clipped=3.0 2023-03-08 16:44:17,244 INFO [train.py:898] (3/4) Epoch 3, batch 2250, loss[loss=0.2409, simple_loss=0.296, pruned_loss=0.09291, over 18459.00 frames. ], tot_loss[loss=0.2729, simple_loss=0.3385, pruned_loss=0.1036, over 3584000.99 frames. ], batch size: 43, lr: 3.18e-02, grad_scale: 8.0 2023-03-08 16:45:16,188 INFO [train.py:898] (3/4) Epoch 3, batch 2300, loss[loss=0.2677, simple_loss=0.3274, pruned_loss=0.104, over 18265.00 frames. ], tot_loss[loss=0.2715, simple_loss=0.3373, pruned_loss=0.1029, over 3591258.88 frames. ], batch size: 47, lr: 3.17e-02, grad_scale: 8.0 2023-03-08 16:45:32,536 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9583.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:45:33,797 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9584.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:45:55,792 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-08 16:46:14,661 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.768e+02 4.849e+02 6.155e+02 7.957e+02 1.846e+03, threshold=1.231e+03, percent-clipped=3.0 2023-03-08 16:46:15,816 INFO [train.py:898] (3/4) Epoch 3, batch 2350, loss[loss=0.341, simple_loss=0.3864, pruned_loss=0.1478, over 12634.00 frames. ], tot_loss[loss=0.2709, simple_loss=0.3366, pruned_loss=0.1026, over 3576361.12 frames. 
], batch size: 129, lr: 3.16e-02, grad_scale: 8.0 2023-03-08 16:46:29,867 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=9631.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:46:29,972 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4701, 6.0199, 5.4648, 6.0047, 5.4176, 5.8955, 6.1871, 5.9624], device='cuda:3'), covar=tensor([0.0942, 0.0572, 0.0334, 0.0551, 0.1741, 0.0471, 0.0385, 0.0629], device='cuda:3'), in_proj_covar=tensor([0.0325, 0.0269, 0.0209, 0.0277, 0.0402, 0.0286, 0.0291, 0.0254], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-08 16:46:42,600 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9642.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:46:46,154 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9645.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:46:59,897 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([1.9151, 4.3365, 5.0533, 4.1231, 2.6395, 2.4175, 4.5955, 5.2710], device='cuda:3'), covar=tensor([0.1417, 0.0404, 0.0074, 0.0262, 0.1075, 0.1266, 0.0221, 0.0021], device='cuda:3'), in_proj_covar=tensor([0.0111, 0.0070, 0.0048, 0.0093, 0.0129, 0.0134, 0.0096, 0.0046], device='cuda:3'), out_proj_covar=tensor([1.7754e-04, 1.2666e-04, 7.9684e-05, 1.4911e-04, 1.9447e-04, 2.0547e-04, 1.5416e-04, 7.3958e-05], device='cuda:3') 2023-03-08 16:47:00,921 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9658.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:47:07,957 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1541, 4.5000, 1.8900, 4.6033, 5.1582, 2.2020, 3.7510, 3.8069], device='cuda:3'), covar=tensor([0.0050, 0.0584, 0.1737, 0.0331, 0.0051, 0.1483, 0.0652, 0.0706], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0102, 0.0157, 0.0131, 0.0065, 0.0147, 0.0158, 0.0149], device='cuda:3'), out_proj_covar=tensor([9.0010e-05, 1.4855e-04, 1.7851e-04, 1.5561e-04, 8.1062e-05, 1.7307e-04, 1.8548e-04, 1.8550e-04], device='cuda:3') 2023-03-08 16:47:14,967 INFO [train.py:898] (3/4) Epoch 3, batch 2400, loss[loss=0.2586, simple_loss=0.3268, pruned_loss=0.09519, over 18385.00 frames. ], tot_loss[loss=0.2711, simple_loss=0.3368, pruned_loss=0.1027, over 3583116.61 frames. ], batch size: 50, lr: 3.16e-02, grad_scale: 8.0 2023-03-08 16:47:15,237 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9669.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:47:39,282 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=9690.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:48:11,385 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=9717.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:48:13,581 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.796e+02 5.501e+02 6.533e+02 8.034e+02 1.544e+03, threshold=1.307e+03, percent-clipped=2.0 2023-03-08 16:48:13,607 INFO [train.py:898] (3/4) Epoch 3, batch 2450, loss[loss=0.2503, simple_loss=0.3204, pruned_loss=0.09012, over 18501.00 frames. ], tot_loss[loss=0.2712, simple_loss=0.3368, pruned_loss=0.1028, over 3586368.07 frames. 
], batch size: 47, lr: 3.15e-02, grad_scale: 8.0 2023-03-08 16:48:14,073 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9719.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:48:41,439 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9743.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:48:53,813 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7589, 3.3924, 3.0166, 2.7973, 3.5128, 2.6205, 2.2330, 3.5610], device='cuda:3'), covar=tensor([0.0044, 0.0100, 0.0176, 0.0113, 0.0065, 0.0168, 0.0270, 0.0112], device='cuda:3'), in_proj_covar=tensor([0.0043, 0.0046, 0.0047, 0.0064, 0.0045, 0.0070, 0.0081, 0.0046], device='cuda:3'), out_proj_covar=tensor([6.1135e-05, 7.0774e-05, 7.5804e-05, 9.5315e-05, 6.7183e-05, 1.0855e-04, 1.3320e-04, 7.1619e-05], device='cuda:3') 2023-03-08 16:49:05,475 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8552, 1.9225, 4.4792, 3.1641, 3.6269, 4.6913, 4.3954, 4.4876], device='cuda:3'), covar=tensor([0.0217, 0.0714, 0.0152, 0.0437, 0.1021, 0.0029, 0.0153, 0.0118], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0126, 0.0070, 0.0127, 0.0207, 0.0076, 0.0103, 0.0099], device='cuda:3'), out_proj_covar=tensor([8.2790e-05, 1.0373e-04, 5.8631e-05, 9.9068e-05, 1.6694e-04, 5.5263e-05, 8.5821e-05, 7.9654e-05], device='cuda:3') 2023-03-08 16:49:11,284 INFO [train.py:898] (3/4) Epoch 3, batch 2500, loss[loss=0.2688, simple_loss=0.3393, pruned_loss=0.09915, over 18455.00 frames. ], tot_loss[loss=0.2716, simple_loss=0.3373, pruned_loss=0.1029, over 3591994.47 frames. ], batch size: 59, lr: 3.14e-02, grad_scale: 8.0 2023-03-08 16:49:13,122 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9770.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:49:35,935 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6088, 3.5058, 1.5304, 4.2795, 2.9606, 4.4969, 1.8412, 3.8347], device='cuda:3'), covar=tensor([0.0376, 0.0855, 0.1669, 0.0233, 0.0882, 0.0073, 0.1561, 0.0340], device='cuda:3'), in_proj_covar=tensor([0.0114, 0.0160, 0.0145, 0.0106, 0.0141, 0.0075, 0.0143, 0.0130], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 16:49:46,217 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4451, 3.4237, 2.7103, 2.5747, 3.1532, 2.2982, 2.3934, 3.3698], device='cuda:3'), covar=tensor([0.0083, 0.0086, 0.0194, 0.0202, 0.0120, 0.0276, 0.0296, 0.0202], device='cuda:3'), in_proj_covar=tensor([0.0044, 0.0047, 0.0049, 0.0067, 0.0046, 0.0072, 0.0083, 0.0047], device='cuda:3'), out_proj_covar=tensor([6.3263e-05, 7.2610e-05, 7.8986e-05, 1.0049e-04, 6.9985e-05, 1.1167e-04, 1.3636e-04, 7.3851e-05], device='cuda:3') 2023-03-08 16:49:51,965 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9804.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:50:09,362 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.524e+02 5.803e+02 7.293e+02 8.388e+02 1.924e+03, threshold=1.459e+03, percent-clipped=4.0 2023-03-08 16:50:09,388 INFO [train.py:898] (3/4) Epoch 3, batch 2550, loss[loss=0.2765, simple_loss=0.347, pruned_loss=0.1031, over 18315.00 frames. ], tot_loss[loss=0.2722, simple_loss=0.3379, pruned_loss=0.1033, over 3601461.10 frames. 
], batch size: 54, lr: 3.14e-02, grad_scale: 8.0 2023-03-08 16:51:06,868 INFO [train.py:898] (3/4) Epoch 3, batch 2600, loss[loss=0.2267, simple_loss=0.2966, pruned_loss=0.0784, over 18280.00 frames. ], tot_loss[loss=0.2709, simple_loss=0.3366, pruned_loss=0.1025, over 3608935.53 frames. ], batch size: 47, lr: 3.13e-02, grad_scale: 8.0 2023-03-08 16:51:51,930 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2971, 5.2203, 4.2598, 5.2023, 5.1956, 4.7505, 5.1784, 4.6129], device='cuda:3'), covar=tensor([0.0539, 0.0523, 0.2201, 0.0823, 0.0532, 0.0439, 0.0461, 0.0692], device='cuda:3'), in_proj_covar=tensor([0.0240, 0.0251, 0.0408, 0.0204, 0.0189, 0.0231, 0.0254, 0.0296], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004], device='cuda:3') 2023-03-08 16:52:01,181 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2167, 4.0909, 4.2456, 4.1108, 3.8826, 3.9098, 4.4438, 4.4819], device='cuda:3'), covar=tensor([0.0090, 0.0123, 0.0110, 0.0110, 0.0149, 0.0131, 0.0139, 0.0115], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0048, 0.0047, 0.0057, 0.0053, 0.0064, 0.0054, 0.0050], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 16:52:05,271 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.617e+02 5.195e+02 6.356e+02 7.709e+02 1.531e+03, threshold=1.271e+03, percent-clipped=3.0 2023-03-08 16:52:05,297 INFO [train.py:898] (3/4) Epoch 3, batch 2650, loss[loss=0.3252, simple_loss=0.3806, pruned_loss=0.1349, over 18468.00 frames. ], tot_loss[loss=0.2718, simple_loss=0.3374, pruned_loss=0.1031, over 3591047.52 frames. ], batch size: 59, lr: 3.13e-02, grad_scale: 8.0 2023-03-08 16:52:31,453 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9940.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:53:03,795 INFO [train.py:898] (3/4) Epoch 3, batch 2700, loss[loss=0.2627, simple_loss=0.3368, pruned_loss=0.09434, over 18364.00 frames. ], tot_loss[loss=0.2701, simple_loss=0.3359, pruned_loss=0.1022, over 3592263.32 frames. ], batch size: 56, lr: 3.12e-02, grad_scale: 8.0 2023-03-08 16:54:01,200 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10014.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:54:06,411 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.004e+02 5.454e+02 7.297e+02 9.817e+02 2.367e+03, threshold=1.459e+03, percent-clipped=11.0 2023-03-08 16:54:06,436 INFO [train.py:898] (3/4) Epoch 3, batch 2750, loss[loss=0.291, simple_loss=0.3599, pruned_loss=0.111, over 18486.00 frames. ], tot_loss[loss=0.2699, simple_loss=0.3358, pruned_loss=0.102, over 3594438.63 frames. ], batch size: 53, lr: 3.11e-02, grad_scale: 8.0 2023-03-08 16:54:07,222 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.85 vs. limit=2.0 2023-03-08 16:54:15,722 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.96 vs. limit=2.0 2023-03-08 16:54:19,197 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.12 vs. 
limit=5.0 2023-03-08 16:54:31,116 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10039.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:54:57,032 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10062.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 16:55:04,433 INFO [train.py:898] (3/4) Epoch 3, batch 2800, loss[loss=0.2692, simple_loss=0.3366, pruned_loss=0.1009, over 18493.00 frames. ], tot_loss[loss=0.2687, simple_loss=0.3353, pruned_loss=0.1011, over 3586986.27 frames. ], batch size: 51, lr: 3.11e-02, grad_scale: 8.0 2023-03-08 16:55:05,826 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10070.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:55:40,524 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10099.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:55:41,702 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10100.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 16:56:00,974 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.05 vs. limit=2.0 2023-03-08 16:56:01,593 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=10118.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:56:02,531 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.762e+02 4.646e+02 5.829e+02 7.162e+02 1.376e+03, threshold=1.166e+03, percent-clipped=0.0 2023-03-08 16:56:02,556 INFO [train.py:898] (3/4) Epoch 3, batch 2850, loss[loss=0.2493, simple_loss=0.3236, pruned_loss=0.08754, over 18377.00 frames. ], tot_loss[loss=0.2684, simple_loss=0.3351, pruned_loss=0.1009, over 3591812.58 frames. ], batch size: 50, lr: 3.10e-02, grad_scale: 8.0 2023-03-08 16:56:04,166 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0000, 3.7754, 4.7528, 3.2917, 4.0535, 3.3116, 3.6065, 2.3683], device='cuda:3'), covar=tensor([0.0410, 0.0313, 0.0039, 0.0324, 0.0310, 0.0922, 0.0653, 0.1053], device='cuda:3'), in_proj_covar=tensor([0.0112, 0.0112, 0.0059, 0.0097, 0.0121, 0.0169, 0.0096, 0.0138], device='cuda:3'), out_proj_covar=tensor([1.1362e-04, 1.1497e-04, 6.2098e-05, 1.0003e-04, 1.2708e-04, 1.6616e-04, 1.1156e-04, 1.3484e-04], device='cuda:3') 2023-03-08 16:56:07,240 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10123.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 16:56:26,753 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5832, 5.6205, 3.3103, 5.2626, 5.2434, 5.6041, 5.4972, 2.9340], device='cuda:3'), covar=tensor([0.0103, 0.0038, 0.0598, 0.0052, 0.0055, 0.0050, 0.0070, 0.0952], device='cuda:3'), in_proj_covar=tensor([0.0055, 0.0041, 0.0072, 0.0046, 0.0049, 0.0041, 0.0051, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003], device='cuda:3') 2023-03-08 16:56:57,432 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2308, 1.8235, 4.3050, 2.5685, 3.2551, 4.5031, 4.1453, 3.9549], device='cuda:3'), covar=tensor([0.0259, 0.0717, 0.0117, 0.0560, 0.1058, 0.0036, 0.0164, 0.0158], device='cuda:3'), in_proj_covar=tensor([0.0103, 0.0129, 0.0068, 0.0131, 0.0212, 0.0078, 0.0109, 0.0102], device='cuda:3'), out_proj_covar=tensor([8.5759e-05, 1.0622e-04, 5.9580e-05, 1.0164e-04, 1.7103e-04, 5.6620e-05, 9.1020e-05, 8.1804e-05], device='cuda:3') 2023-03-08 16:57:01,501 INFO [train.py:898] (3/4) Epoch 3, batch 2900, loss[loss=0.3256, simple_loss=0.3707, 
pruned_loss=0.1402, over 12290.00 frames. ], tot_loss[loss=0.2691, simple_loss=0.3358, pruned_loss=0.1012, over 3580070.74 frames. ], batch size: 129, lr: 3.09e-02, grad_scale: 4.0 2023-03-08 16:57:04,243 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5644, 3.4823, 3.0241, 2.9622, 3.1056, 2.7831, 2.9181, 3.4608], device='cuda:3'), covar=tensor([0.0046, 0.0097, 0.0183, 0.0139, 0.0235, 0.0231, 0.0209, 0.0110], device='cuda:3'), in_proj_covar=tensor([0.0042, 0.0047, 0.0047, 0.0069, 0.0048, 0.0072, 0.0084, 0.0046], device='cuda:3'), out_proj_covar=tensor([6.0165e-05, 7.2132e-05, 7.6141e-05, 1.0482e-04, 7.4624e-05, 1.1304e-04, 1.3797e-04, 7.3519e-05], device='cuda:3') 2023-03-08 16:57:41,597 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4166, 4.2010, 5.2207, 4.2247, 3.0599, 2.6316, 4.6596, 5.2528], device='cuda:3'), covar=tensor([0.1014, 0.0565, 0.0037, 0.0228, 0.0878, 0.1033, 0.0188, 0.0024], device='cuda:3'), in_proj_covar=tensor([0.0116, 0.0081, 0.0050, 0.0098, 0.0133, 0.0137, 0.0100, 0.0048], device='cuda:3'), out_proj_covar=tensor([1.8546e-04, 1.4656e-04, 8.1730e-05, 1.5807e-04, 2.0202e-04, 2.0995e-04, 1.5989e-04, 7.7226e-05], device='cuda:3') 2023-03-08 16:58:00,061 INFO [train.py:898] (3/4) Epoch 3, batch 2950, loss[loss=0.284, simple_loss=0.3489, pruned_loss=0.1095, over 18407.00 frames. ], tot_loss[loss=0.2685, simple_loss=0.3349, pruned_loss=0.101, over 3584082.23 frames. ], batch size: 52, lr: 3.09e-02, grad_scale: 4.0 2023-03-08 16:58:01,191 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.457e+02 4.931e+02 6.360e+02 7.565e+02 1.819e+03, threshold=1.272e+03, percent-clipped=6.0 2023-03-08 16:58:04,335 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0 2023-03-08 16:58:25,981 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10240.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 16:58:49,597 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.91 vs. limit=5.0 2023-03-08 16:58:59,061 INFO [train.py:898] (3/4) Epoch 3, batch 3000, loss[loss=0.2307, simple_loss=0.3104, pruned_loss=0.07551, over 18397.00 frames. ], tot_loss[loss=0.2678, simple_loss=0.3344, pruned_loss=0.1006, over 3570886.65 frames. ], batch size: 50, lr: 3.08e-02, grad_scale: 4.0 2023-03-08 16:58:59,061 INFO [train.py:923] (3/4) Computing validation loss 2023-03-08 16:59:10,963 INFO [train.py:932] (3/4) Epoch 3, validation: loss=0.2015, simple_loss=0.3025, pruned_loss=0.05021, over 944034.00 frames. 
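Annotation on the train.py:923/932 records just above: at Epoch 3, batch 3000 the loop pauses, computes a frame-averaged validation loss over a fixed 944034-frame dev set, and reports it (peak CUDA memory follows in the next record) before training resumes. A minimal sketch of that interleaving, under stated assumptions: compute_validation_loss, the model returning (loss, num_frames), and valid_interval=3000 (inferred from the batch-3000 pass) are illustrative stand-ins, not the project's actual train.py API.

    # Sketch only: a periodic validation pass interleaved with training,
    # matching the shape of the train.py:923/932/933 records in this log.
    # The helper name and the (loss, num_frames) model interface are
    # illustrative assumptions, not icefall's actual API.
    import logging

    import torch

    def compute_validation_loss(model, valid_dl):
        model.eval()
        tot_loss, tot_frames = 0.0, 0
        with torch.no_grad():
            for batch in valid_dl:
                loss, num_frames = model(batch)  # assumed interface
                tot_loss += loss.item() * num_frames
                tot_frames += num_frames
        model.train()
        return tot_loss / tot_frames  # frame-averaged, as in the log

    def train_one_epoch(model, train_dl, valid_dl, optimizer,
                        device, valid_interval=3000):
        for batch_idx, batch in enumerate(train_dl):
            optimizer.zero_grad()
            loss, _ = model(batch)
            loss.backward()
            optimizer.step()
            if batch_idx % valid_interval == 0:
                logging.info("Computing validation loss")
                valid_loss = compute_validation_loss(model, valid_dl)
                logging.info("validation: loss=%.4f" % valid_loss)
                mem_mb = torch.cuda.max_memory_allocated(device) // 2**20
                logging.info("Maximum memory allocated so far is %dMB" % mem_mb)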
2023-03-08 16:59:10,963 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-08 16:59:11,312 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7834, 3.3973, 1.5509, 4.2639, 2.8598, 4.6523, 2.2899, 4.0394], device='cuda:3'), covar=tensor([0.0409, 0.0943, 0.1679, 0.0283, 0.0975, 0.0057, 0.1258, 0.0282], device='cuda:3'), in_proj_covar=tensor([0.0124, 0.0165, 0.0149, 0.0113, 0.0145, 0.0077, 0.0146, 0.0133], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 16:59:34,256 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=10288.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:00:03,677 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10314.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:00:08,967 INFO [train.py:898] (3/4) Epoch 3, batch 3050, loss[loss=0.2685, simple_loss=0.3422, pruned_loss=0.09733, over 18342.00 frames. ], tot_loss[loss=0.2673, simple_loss=0.3337, pruned_loss=0.1004, over 3571130.51 frames. ], batch size: 56, lr: 3.08e-02, grad_scale: 4.0 2023-03-08 17:00:10,063 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.678e+02 5.040e+02 6.054e+02 8.050e+02 1.536e+03, threshold=1.211e+03, percent-clipped=3.0 2023-03-08 17:00:19,264 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([1.7987, 4.1389, 5.2568, 3.9737, 2.8061, 2.4182, 4.5086, 5.0642], device='cuda:3'), covar=tensor([0.1271, 0.0529, 0.0031, 0.0285, 0.0922, 0.1125, 0.0207, 0.0024], device='cuda:3'), in_proj_covar=tensor([0.0115, 0.0084, 0.0050, 0.0101, 0.0133, 0.0137, 0.0100, 0.0048], device='cuda:3'), out_proj_covar=tensor([1.8453e-04, 1.4990e-04, 8.2540e-05, 1.6252e-04, 2.0135e-04, 2.1023e-04, 1.6146e-04, 7.7723e-05], device='cuda:3') 2023-03-08 17:00:59,201 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=10362.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:01:07,134 INFO [train.py:898] (3/4) Epoch 3, batch 3100, loss[loss=0.2225, simple_loss=0.3027, pruned_loss=0.07116, over 18305.00 frames. ], tot_loss[loss=0.267, simple_loss=0.3338, pruned_loss=0.1001, over 3591500.18 frames. ], batch size: 49, lr: 3.07e-02, grad_scale: 4.0 2023-03-08 17:01:38,272 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-08 17:01:38,742 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10395.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 17:01:43,406 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10399.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:01:54,974 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10409.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:02:05,051 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10418.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 17:02:05,995 INFO [train.py:898] (3/4) Epoch 3, batch 3150, loss[loss=0.218, simple_loss=0.2846, pruned_loss=0.07574, over 18247.00 frames. ], tot_loss[loss=0.2669, simple_loss=0.334, pruned_loss=0.09993, over 3588780.44 frames. 
], batch size: 45, lr: 3.06e-02, grad_scale: 4.0 2023-03-08 17:02:07,190 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.149e+02 4.695e+02 5.685e+02 7.343e+02 1.972e+03, threshold=1.137e+03, percent-clipped=4.0 2023-03-08 17:02:40,174 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=10447.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:02:48,051 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10454.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:03:03,806 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.1562, 4.0416, 5.1211, 3.7376, 2.8588, 2.6810, 4.4786, 5.0845], device='cuda:3'), covar=tensor([0.1102, 0.0622, 0.0031, 0.0266, 0.0800, 0.0937, 0.0196, 0.0025], device='cuda:3'), in_proj_covar=tensor([0.0119, 0.0088, 0.0053, 0.0104, 0.0134, 0.0139, 0.0104, 0.0050], device='cuda:3'), out_proj_covar=tensor([1.9044e-04, 1.5629e-04, 8.6883e-05, 1.6729e-04, 2.0431e-04, 2.1359e-04, 1.6678e-04, 8.1092e-05], device='cuda:3') 2023-03-08 17:03:04,449 INFO [train.py:898] (3/4) Epoch 3, batch 3200, loss[loss=0.2685, simple_loss=0.3287, pruned_loss=0.1041, over 18410.00 frames. ], tot_loss[loss=0.2642, simple_loss=0.3315, pruned_loss=0.09851, over 3592074.34 frames. ], batch size: 48, lr: 3.06e-02, grad_scale: 8.0 2023-03-08 17:03:05,984 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10470.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:03:59,393 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10515.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 17:04:03,374 INFO [train.py:898] (3/4) Epoch 3, batch 3250, loss[loss=0.2828, simple_loss=0.3516, pruned_loss=0.107, over 18094.00 frames. ], tot_loss[loss=0.2639, simple_loss=0.331, pruned_loss=0.09835, over 3575322.11 frames. ], batch size: 62, lr: 3.05e-02, grad_scale: 8.0 2023-03-08 17:04:04,539 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.357e+02 4.994e+02 6.141e+02 8.166e+02 2.371e+03, threshold=1.228e+03, percent-clipped=6.0 2023-03-08 17:05:01,931 INFO [train.py:898] (3/4) Epoch 3, batch 3300, loss[loss=0.2758, simple_loss=0.3454, pruned_loss=0.1031, over 18471.00 frames. ], tot_loss[loss=0.264, simple_loss=0.3315, pruned_loss=0.09831, over 3581177.27 frames. ], batch size: 59, lr: 3.05e-02, grad_scale: 8.0 2023-03-08 17:06:01,135 INFO [train.py:898] (3/4) Epoch 3, batch 3350, loss[loss=0.2356, simple_loss=0.307, pruned_loss=0.08216, over 18274.00 frames. ], tot_loss[loss=0.2644, simple_loss=0.3315, pruned_loss=0.09862, over 3565087.29 frames. ], batch size: 47, lr: 3.04e-02, grad_scale: 8.0 2023-03-08 17:06:02,282 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.520e+02 5.384e+02 6.304e+02 8.125e+02 1.835e+03, threshold=1.261e+03, percent-clipped=2.0 2023-03-08 17:06:19,379 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2167, 5.4321, 2.6176, 5.1284, 5.0717, 5.3850, 5.2230, 2.6382], device='cuda:3'), covar=tensor([0.0157, 0.0047, 0.0811, 0.0056, 0.0077, 0.0074, 0.0096, 0.1091], device='cuda:3'), in_proj_covar=tensor([0.0058, 0.0042, 0.0076, 0.0049, 0.0051, 0.0043, 0.0054, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0002, 0.0003], device='cuda:3') 2023-03-08 17:06:36,348 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. 
limit=2.0 2023-03-08 17:06:59,031 INFO [train.py:898] (3/4) Epoch 3, batch 3400, loss[loss=0.2765, simple_loss=0.3441, pruned_loss=0.1045, over 18227.00 frames. ], tot_loss[loss=0.265, simple_loss=0.3321, pruned_loss=0.09893, over 3568940.97 frames. ], batch size: 60, lr: 3.03e-02, grad_scale: 8.0 2023-03-08 17:07:13,292 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.74 vs. limit=2.0 2023-03-08 17:07:30,290 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10695.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:07:56,766 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10718.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 17:07:57,572 INFO [train.py:898] (3/4) Epoch 3, batch 3450, loss[loss=0.2894, simple_loss=0.3527, pruned_loss=0.113, over 18624.00 frames. ], tot_loss[loss=0.2648, simple_loss=0.3322, pruned_loss=0.09872, over 3579211.08 frames. ], batch size: 52, lr: 3.03e-02, grad_scale: 8.0 2023-03-08 17:07:58,711 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.826e+02 5.643e+02 6.682e+02 8.838e+02 1.430e+03, threshold=1.336e+03, percent-clipped=8.0 2023-03-08 17:07:59,212 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0543, 4.2601, 2.1195, 4.5928, 5.1670, 2.3961, 3.9006, 3.7372], device='cuda:3'), covar=tensor([0.0065, 0.0922, 0.1634, 0.0292, 0.0033, 0.1337, 0.0507, 0.0675], device='cuda:3'), in_proj_covar=tensor([0.0075, 0.0116, 0.0165, 0.0140, 0.0062, 0.0155, 0.0166, 0.0153], device='cuda:3'), out_proj_covar=tensor([9.7429e-05, 1.6890e-04, 1.9487e-04, 1.7269e-04, 8.1409e-05, 1.8955e-04, 2.0114e-04, 1.9572e-04], device='cuda:3') 2023-03-08 17:08:24,822 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=10743.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:08:26,192 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10744.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:08:47,169 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0710, 4.3658, 2.0247, 4.6434, 5.1633, 2.3440, 4.0174, 3.5422], device='cuda:3'), covar=tensor([0.0068, 0.0797, 0.1790, 0.0339, 0.0033, 0.1539, 0.0562, 0.0892], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0114, 0.0161, 0.0139, 0.0061, 0.0152, 0.0161, 0.0151], device='cuda:3'), out_proj_covar=tensor([9.4288e-05, 1.6571e-04, 1.9061e-04, 1.7135e-04, 8.0031e-05, 1.8570e-04, 1.9580e-04, 1.9245e-04], device='cuda:3') 2023-03-08 17:08:51,321 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10765.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:08:52,436 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=10766.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 17:08:55,413 INFO [train.py:898] (3/4) Epoch 3, batch 3500, loss[loss=0.2733, simple_loss=0.3374, pruned_loss=0.1046, over 18406.00 frames. ], tot_loss[loss=0.2643, simple_loss=0.3319, pruned_loss=0.09834, over 3578301.74 frames. ], batch size: 48, lr: 3.02e-02, grad_scale: 8.0 2023-03-08 17:09:01,557 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10774.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:09:27,841 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.08 vs. 
limit=2.0 2023-03-08 17:09:36,609 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10805.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:09:41,058 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4311, 3.3428, 1.5654, 4.2699, 2.9462, 4.4280, 1.8836, 3.5000], device='cuda:3'), covar=tensor([0.0564, 0.0994, 0.1872, 0.0322, 0.1057, 0.0075, 0.1472, 0.0459], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0169, 0.0153, 0.0118, 0.0147, 0.0079, 0.0150, 0.0136], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 17:09:41,872 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10810.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 17:09:51,966 INFO [train.py:898] (3/4) Epoch 3, batch 3550, loss[loss=0.2184, simple_loss=0.2857, pruned_loss=0.07557, over 18388.00 frames. ], tot_loss[loss=0.264, simple_loss=0.3318, pruned_loss=0.09806, over 3577551.41 frames. ], batch size: 42, lr: 3.02e-02, grad_scale: 8.0 2023-03-08 17:09:52,947 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.843e+02 4.878e+02 6.150e+02 7.630e+02 1.368e+03, threshold=1.230e+03, percent-clipped=1.0 2023-03-08 17:09:55,364 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5795, 5.0293, 5.1369, 4.9609, 4.7836, 4.9570, 4.2123, 4.8420], device='cuda:3'), covar=tensor([0.0218, 0.0381, 0.0214, 0.0209, 0.0481, 0.0216, 0.1221, 0.0306], device='cuda:3'), in_proj_covar=tensor([0.0108, 0.0146, 0.0128, 0.0117, 0.0145, 0.0140, 0.0203, 0.0128], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 17:10:09,075 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10835.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:10:22,121 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3106, 5.1425, 4.6882, 5.2025, 5.1929, 4.5472, 5.0868, 4.6028], device='cuda:3'), covar=tensor([0.0320, 0.0394, 0.1637, 0.0577, 0.0394, 0.0482, 0.0379, 0.0692], device='cuda:3'), in_proj_covar=tensor([0.0240, 0.0263, 0.0415, 0.0211, 0.0196, 0.0245, 0.0265, 0.0304], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-08 17:10:38,722 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-08 17:10:46,352 INFO [train.py:898] (3/4) Epoch 3, batch 3600, loss[loss=0.3217, simple_loss=0.3661, pruned_loss=0.1386, over 12285.00 frames. ], tot_loss[loss=0.2653, simple_loss=0.3327, pruned_loss=0.099, over 3572360.64 frames. 
], batch size: 130, lr: 3.01e-02, grad_scale: 8.0 2023-03-08 17:10:59,679 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.0020, 3.7535, 4.7030, 4.1646, 2.9722, 2.5186, 4.3421, 4.9429], device='cuda:3'), covar=tensor([0.1175, 0.0709, 0.0046, 0.0196, 0.0890, 0.1159, 0.0227, 0.0029], device='cuda:3'), in_proj_covar=tensor([0.0120, 0.0094, 0.0055, 0.0105, 0.0138, 0.0143, 0.0109, 0.0051], device='cuda:3'), out_proj_covar=tensor([1.9377e-04, 1.6702e-04, 9.1463e-05, 1.7181e-04, 2.1201e-04, 2.2059e-04, 1.7590e-04, 8.3462e-05], device='cuda:3') 2023-03-08 17:11:03,844 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10885.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 17:11:51,263 INFO [train.py:898] (3/4) Epoch 4, batch 0, loss[loss=0.267, simple_loss=0.3398, pruned_loss=0.09705, over 18029.00 frames. ], tot_loss[loss=0.267, simple_loss=0.3398, pruned_loss=0.09705, over 18029.00 frames. ], batch size: 65, lr: 2.81e-02, grad_scale: 8.0 2023-03-08 17:11:51,263 INFO [train.py:923] (3/4) Computing validation loss 2023-03-08 17:12:03,130 INFO [train.py:932] (3/4) Epoch 4, validation: loss=0.2018, simple_loss=0.3032, pruned_loss=0.05022, over 944034.00 frames. 2023-03-08 17:12:03,132 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-08 17:12:22,935 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.062e+02 5.140e+02 6.071e+02 7.420e+02 1.697e+03, threshold=1.214e+03, percent-clipped=4.0 2023-03-08 17:12:25,476 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3065, 1.7626, 4.7243, 2.7206, 3.5789, 4.8395, 4.3830, 4.5323], device='cuda:3'), covar=tensor([0.0266, 0.0771, 0.0119, 0.0545, 0.0944, 0.0028, 0.0157, 0.0122], device='cuda:3'), in_proj_covar=tensor([0.0110, 0.0143, 0.0074, 0.0143, 0.0224, 0.0079, 0.0116, 0.0108], device='cuda:3'), out_proj_covar=tensor([9.1670e-05, 1.1812e-04, 6.4473e-05, 1.0975e-04, 1.7958e-04, 5.7753e-05, 9.7446e-05, 8.6473e-05], device='cuda:3') 2023-03-08 17:12:49,335 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6479, 4.8203, 4.7798, 4.6704, 4.4197, 4.5898, 5.0436, 5.0954], device='cuda:3'), covar=tensor([0.0074, 0.0077, 0.0098, 0.0084, 0.0112, 0.0119, 0.0079, 0.0093], device='cuda:3'), in_proj_covar=tensor([0.0058, 0.0046, 0.0044, 0.0055, 0.0050, 0.0062, 0.0052, 0.0049], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002], device='cuda:3') 2023-03-08 17:12:51,697 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0197, 4.2107, 2.1513, 4.3598, 4.8568, 2.4236, 3.9123, 3.5488], device='cuda:3'), covar=tensor([0.0062, 0.0774, 0.1470, 0.0333, 0.0036, 0.1315, 0.0475, 0.0820], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0114, 0.0161, 0.0139, 0.0063, 0.0152, 0.0161, 0.0152], device='cuda:3'), out_proj_covar=tensor([9.4073e-05, 1.6653e-04, 1.9215e-04, 1.7291e-04, 8.2721e-05, 1.8635e-04, 1.9733e-04, 1.9564e-04], device='cuda:3') 2023-03-08 17:12:55,105 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10946.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 17:13:02,668 INFO [train.py:898] (3/4) Epoch 4, batch 50, loss[loss=0.2666, simple_loss=0.3352, pruned_loss=0.09897, over 18386.00 frames. ], tot_loss[loss=0.2603, simple_loss=0.3306, pruned_loss=0.09499, over 825059.95 frames. 
], batch size: 50, lr: 2.81e-02, grad_scale: 8.0 2023-03-08 17:13:29,822 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8953, 3.1039, 4.3858, 4.3141, 2.1729, 4.3650, 3.9614, 2.4718], device='cuda:3'), covar=tensor([0.0193, 0.0560, 0.0050, 0.0109, 0.1359, 0.0089, 0.0191, 0.0954], device='cuda:3'), in_proj_covar=tensor([0.0106, 0.0127, 0.0074, 0.0081, 0.0150, 0.0089, 0.0093, 0.0151], device='cuda:3'), out_proj_covar=tensor([1.0393e-04, 1.2347e-04, 7.7498e-05, 7.9486e-05, 1.4642e-04, 8.5213e-05, 9.9794e-05, 1.5202e-04], device='cuda:3') 2023-03-08 17:13:34,048 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.64 vs. limit=2.0 2023-03-08 17:13:48,243 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1127, 2.9814, 4.3445, 4.3054, 2.2184, 4.4020, 3.7636, 2.7436], device='cuda:3'), covar=tensor([0.0183, 0.0694, 0.0057, 0.0146, 0.1521, 0.0093, 0.0275, 0.1012], device='cuda:3'), in_proj_covar=tensor([0.0107, 0.0128, 0.0075, 0.0082, 0.0152, 0.0090, 0.0094, 0.0152], device='cuda:3'), out_proj_covar=tensor([1.0403e-04, 1.2511e-04, 7.8218e-05, 8.0394e-05, 1.4807e-04, 8.5706e-05, 1.0077e-04, 1.5386e-04], device='cuda:3') 2023-03-08 17:14:00,536 INFO [train.py:898] (3/4) Epoch 4, batch 100, loss[loss=0.2369, simple_loss=0.3112, pruned_loss=0.08133, over 18380.00 frames. ], tot_loss[loss=0.263, simple_loss=0.3322, pruned_loss=0.09695, over 1443061.59 frames. ], batch size: 50, lr: 2.80e-02, grad_scale: 8.0 2023-03-08 17:14:19,671 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.738e+02 4.868e+02 5.817e+02 7.449e+02 2.139e+03, threshold=1.163e+03, percent-clipped=6.0 2023-03-08 17:14:46,964 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3112, 3.4044, 2.8025, 2.6504, 3.0307, 2.7293, 2.4778, 3.3967], device='cuda:3'), covar=tensor([0.0065, 0.0078, 0.0248, 0.0135, 0.0161, 0.0213, 0.0253, 0.0144], device='cuda:3'), in_proj_covar=tensor([0.0044, 0.0049, 0.0050, 0.0076, 0.0051, 0.0078, 0.0091, 0.0049], device='cuda:3'), out_proj_covar=tensor([6.5945e-05, 7.7542e-05, 8.2577e-05, 1.1840e-04, 7.9505e-05, 1.2343e-04, 1.5106e-04, 7.9263e-05], device='cuda:3') 2023-03-08 17:14:58,818 INFO [train.py:898] (3/4) Epoch 4, batch 150, loss[loss=0.2923, simple_loss=0.3591, pruned_loss=0.1128, over 18469.00 frames. ], tot_loss[loss=0.2616, simple_loss=0.3312, pruned_loss=0.09598, over 1919265.66 frames. 
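
The `zipformer.py:1455` tensors are attention diagnostics: per-head entropies of the self-attention distributions, logged together with covariance statistics of the same quantity. A near-uniform head over T source positions has entropy close to ln T (about 6 nats for a few hundred frames), so the spread seen in these dumps, from roughly 1.4 up to 6 nats, separates sharply peaked heads from diffuse ones. A sketch of the underlying computation, assuming weights of shape (num_heads, tgt_len, src_len) that already sum to 1 over the last axis:

```python
import torch

def attention_entropy(attn_weights: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
    """Mean entropy (nats) of each head's attention distribution.

    attn_weights: (num_heads, tgt_len, src_len), rows summing to 1.
    Returns a tensor of shape (num_heads,).
    """
    p = attn_weights.clamp(min=eps)
    entropy = -(p * p.log()).sum(dim=-1)  # (num_heads, tgt_len)
    return entropy.mean(dim=-1)

# A uniform head attains the maximum entropy ln(src_len):
w = torch.full((1, 4, 100), 1.0 / 100)
print(attention_entropy(w))  # ~4.605 = ln(100)
```
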
], batch size: 59, lr: 2.80e-02, grad_scale: 8.0 2023-03-08 17:15:02,579 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0235, 2.8844, 2.6381, 2.4168, 2.8207, 2.5793, 2.3016, 3.0553], device='cuda:3'), covar=tensor([0.0065, 0.0105, 0.0159, 0.0138, 0.0135, 0.0205, 0.0258, 0.0127], device='cuda:3'), in_proj_covar=tensor([0.0044, 0.0049, 0.0050, 0.0076, 0.0051, 0.0078, 0.0091, 0.0049], device='cuda:3'), out_proj_covar=tensor([6.6318e-05, 7.7130e-05, 8.2268e-05, 1.1818e-04, 7.9549e-05, 1.2256e-04, 1.5063e-04, 7.8805e-05], device='cuda:3') 2023-03-08 17:15:12,432 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11065.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:15:16,998 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.0668, 3.8300, 4.8951, 4.0131, 2.8497, 2.4713, 4.3171, 4.9947], device='cuda:3'), covar=tensor([0.1169, 0.0714, 0.0046, 0.0283, 0.0970, 0.1146, 0.0239, 0.0030], device='cuda:3'), in_proj_covar=tensor([0.0120, 0.0096, 0.0055, 0.0105, 0.0137, 0.0142, 0.0109, 0.0051], device='cuda:3'), out_proj_covar=tensor([1.9458e-04, 1.6977e-04, 9.1056e-05, 1.7176e-04, 2.1015e-04, 2.2020e-04, 1.7529e-04, 8.3834e-05], device='cuda:3') 2023-03-08 17:15:23,582 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9183, 4.6199, 4.9169, 4.7870, 4.5464, 4.6440, 5.1783, 5.2128], device='cuda:3'), covar=tensor([0.0071, 0.0124, 0.0091, 0.0091, 0.0124, 0.0113, 0.0110, 0.0108], device='cuda:3'), in_proj_covar=tensor([0.0059, 0.0047, 0.0044, 0.0056, 0.0051, 0.0063, 0.0052, 0.0050], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002], device='cuda:3') 2023-03-08 17:15:25,227 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.62 vs. limit=2.0 2023-03-08 17:15:45,958 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5932, 5.0720, 5.5919, 5.5085, 5.3559, 6.1129, 5.7386, 5.5583], device='cuda:3'), covar=tensor([0.0697, 0.0621, 0.0603, 0.0460, 0.1379, 0.0687, 0.0481, 0.1533], device='cuda:3'), in_proj_covar=tensor([0.0214, 0.0160, 0.0168, 0.0168, 0.0218, 0.0238, 0.0156, 0.0229], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 17:15:54,011 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11100.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:15:57,090 INFO [train.py:898] (3/4) Epoch 4, batch 200, loss[loss=0.2803, simple_loss=0.3463, pruned_loss=0.1072, over 16291.00 frames. ], tot_loss[loss=0.2609, simple_loss=0.3305, pruned_loss=0.09562, over 2294001.16 frames. ], batch size: 94, lr: 2.79e-02, grad_scale: 8.0 2023-03-08 17:16:05,214 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11110.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 17:16:08,774 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=11113.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:16:16,251 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.946e+02 4.745e+02 5.920e+02 7.633e+02 1.560e+03, threshold=1.184e+03, percent-clipped=4.0 2023-03-08 17:16:27,711 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11130.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:16:55,264 INFO [train.py:898] (3/4) Epoch 4, batch 250, loss[loss=0.2628, simple_loss=0.3349, pruned_loss=0.09532, over 18629.00 frames. 
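
The `scaling.py:679` lines compare a whitening metric against a limit. The metric gauges how far the per-group feature covariance is from a multiple of the identity: it is 1.0 for perfectly white features and grows as variance concentrates in fewer directions, and icefall's Whiten module only intervenes (via a gradient penalty) when the metric exceeds the limit, which is why most of these lines show the metric hovering near or below `limit=2.0`. A rough reconstruction using the standard trace-based whiteness ratio d·tr(C²)/tr(C)², not necessarily icefall's exact formula:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Whiteness of features: 1.0 iff each group's covariance is sigma^2 * I.

    x: (num_frames, num_channels); channels are split into num_groups groups
    (e.g. num_channels=96, num_groups=8 -> 12-dim groups, as in the log).
    """
    n, c = x.shape
    d = c // num_groups
    x = x.reshape(n, num_groups, d).permute(1, 0, 2)   # (groups, frames, d)
    x = x - x.mean(dim=1, keepdim=True)
    cov = x.transpose(1, 2) @ x / n                    # (groups, d, d)
    tr_c2 = (cov ** 2).sum(dim=(1, 2))                 # trace(C @ C), C symmetric
    tr_c = cov.diagonal(dim1=1, dim2=2).sum(dim=1)     # trace(C)
    return (d * tr_c2 / tr_c.clamp(min=1e-20) ** 2).mean()

x = torch.randn(1000, 96)                  # roughly white input
print(whitening_metric(x, num_groups=8))   # close to 1.0, plus O(d/n) noise
```
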
], tot_loss[loss=0.2598, simple_loss=0.3294, pruned_loss=0.09515, over 2574974.57 frames. ], batch size: 52, lr: 2.79e-02, grad_scale: 8.0 2023-03-08 17:17:01,240 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=11158.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:17:11,553 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3611, 2.7146, 2.1956, 2.8426, 3.1502, 3.4129, 2.6790, 3.0779], device='cuda:3'), covar=tensor([0.0350, 0.0237, 0.1051, 0.0318, 0.0284, 0.0188, 0.0408, 0.0211], device='cuda:3'), in_proj_covar=tensor([0.0075, 0.0061, 0.0120, 0.0082, 0.0068, 0.0048, 0.0079, 0.0072], device='cuda:3'), out_proj_covar=tensor([1.4169e-04, 1.1704e-04, 2.0515e-04, 1.4298e-04, 1.2814e-04, 8.3628e-05, 1.4335e-04, 1.2869e-04], device='cuda:3') 2023-03-08 17:17:13,901 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7046, 3.4314, 1.6031, 4.3581, 2.8091, 4.5964, 2.1784, 3.8791], device='cuda:3'), covar=tensor([0.0395, 0.0834, 0.1737, 0.0322, 0.1048, 0.0075, 0.1224, 0.0286], device='cuda:3'), in_proj_covar=tensor([0.0130, 0.0169, 0.0154, 0.0118, 0.0149, 0.0082, 0.0152, 0.0137], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 17:17:19,456 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11174.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:17:24,829 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5697, 5.0986, 5.6970, 5.4732, 5.3918, 6.2499, 5.7057, 5.6196], device='cuda:3'), covar=tensor([0.0574, 0.0488, 0.0492, 0.0496, 0.1195, 0.0485, 0.0477, 0.1188], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0153, 0.0163, 0.0164, 0.0211, 0.0232, 0.0153, 0.0225], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 17:17:54,162 INFO [train.py:898] (3/4) Epoch 4, batch 300, loss[loss=0.2815, simple_loss=0.3471, pruned_loss=0.108, over 17933.00 frames. ], tot_loss[loss=0.2593, simple_loss=0.3289, pruned_loss=0.09489, over 2799327.21 frames. ], batch size: 65, lr: 2.78e-02, grad_scale: 8.0 2023-03-08 17:18:07,466 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4843, 5.3102, 5.3272, 4.9697, 4.9640, 5.0759, 4.4197, 5.0066], device='cuda:3'), covar=tensor([0.0298, 0.0214, 0.0145, 0.0213, 0.0313, 0.0217, 0.1056, 0.0279], device='cuda:3'), in_proj_covar=tensor([0.0111, 0.0145, 0.0132, 0.0118, 0.0142, 0.0144, 0.0207, 0.0130], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 17:18:13,963 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.374e+02 4.613e+02 5.593e+02 6.411e+02 1.138e+03, threshold=1.119e+03, percent-clipped=0.0 2023-03-08 17:18:31,472 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11235.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:18:39,296 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11241.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 17:18:53,617 INFO [train.py:898] (3/4) Epoch 4, batch 350, loss[loss=0.271, simple_loss=0.3457, pruned_loss=0.09812, over 18290.00 frames. ], tot_loss[loss=0.2577, simple_loss=0.3272, pruned_loss=0.0941, over 2974644.21 frames. 
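
The per-batch records decompose the pruned-transducer objective, and throughout this log they satisfy loss = 0.5 · simple_loss + pruned_loss (at batch 300 above: 0.5 × 0.3289 + 0.09489 ≈ 0.2593). `simple_loss` is k2's cheap RNN-T loss over a trivial additive joiner; its alignment gradients define a band of a few symbols per frame, inside which the expensive full joiner is evaluated to give `pruned_loss`. A condensed sketch using k2's pruned-loss API; in the real model the two stages use separate low-dimensional projections, and shapes here are simplified:

```python
import k2
import torch

def pruned_transducer_loss(
    lm: torch.Tensor,        # (B, S+1, V) decoder output, projected to vocab
    am: torch.Tensor,        # (B, T, V) encoder output, projected to vocab
    symbols: torch.Tensor,   # (B, S) label ids
    boundary: torch.Tensor,  # (B, 4) = [0, 0, S, T] per sequence
    joiner,                  # full joiner applied to the pruned band
    prune_range: int = 5,
    simple_loss_scale: float = 0.5,
):
    # Stage 1: cheap loss on the "simple" (additive) joiner.
    simple_loss, (px_grad, py_grad) = k2.rnnt_loss_smoothed(
        lm=lm, am=am, symbols=symbols, termination_symbol=0,
        boundary=boundary, return_grad=True, reduction="sum",
    )
    # The simple-loss gradients say where the alignment mass lives;
    # keep only a band of prune_range symbols per frame.
    ranges = k2.get_rnnt_prune_ranges(
        px_grad=px_grad, py_grad=py_grad, boundary=boundary,
        s_range=prune_range,
    )
    am_pruned, lm_pruned = k2.do_rnnt_pruning(am=am, lm=lm, ranges=ranges)
    logits = joiner(am_pruned, lm_pruned)   # (B, T, prune_range, V)
    pruned_loss = k2.rnnt_loss_pruned(
        logits=logits, symbols=symbols, ranges=ranges,
        termination_symbol=0, boundary=boundary, reduction="sum",
    )
    return simple_loss_scale * simple_loss + pruned_loss
```
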
], batch size: 57, lr: 2.78e-02, grad_scale: 8.0 2023-03-08 17:18:53,980 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5031, 5.4480, 4.8645, 5.4226, 5.4451, 4.8435, 5.3803, 4.8828], device='cuda:3'), covar=tensor([0.0333, 0.0301, 0.1700, 0.0604, 0.0304, 0.0410, 0.0301, 0.0572], device='cuda:3'), in_proj_covar=tensor([0.0252, 0.0275, 0.0434, 0.0221, 0.0204, 0.0252, 0.0270, 0.0323], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-08 17:19:06,900 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5935, 3.5037, 1.8645, 4.3985, 2.9347, 4.6477, 1.9865, 3.8661], device='cuda:3'), covar=tensor([0.0481, 0.0905, 0.1654, 0.0284, 0.0903, 0.0078, 0.1415, 0.0346], device='cuda:3'), in_proj_covar=tensor([0.0132, 0.0173, 0.0155, 0.0119, 0.0149, 0.0085, 0.0154, 0.0138], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 17:19:39,198 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4036, 2.7153, 2.1851, 2.6208, 3.1717, 3.2479, 2.6690, 2.7669], device='cuda:3'), covar=tensor([0.0324, 0.0296, 0.1041, 0.0426, 0.0380, 0.0322, 0.0426, 0.0313], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0061, 0.0123, 0.0088, 0.0069, 0.0050, 0.0082, 0.0076], device='cuda:3'), out_proj_covar=tensor([1.4797e-04, 1.1772e-04, 2.1194e-04, 1.5309e-04, 1.2975e-04, 8.8091e-05, 1.4894e-04, 1.3510e-04], device='cuda:3') 2023-03-08 17:19:52,173 INFO [train.py:898] (3/4) Epoch 4, batch 400, loss[loss=0.2946, simple_loss=0.3576, pruned_loss=0.1158, over 17836.00 frames. ], tot_loss[loss=0.2567, simple_loss=0.3266, pruned_loss=0.09342, over 3110526.22 frames. ], batch size: 70, lr: 2.77e-02, grad_scale: 8.0 2023-03-08 17:20:09,478 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.46 vs. limit=5.0 2023-03-08 17:20:10,946 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.476e+02 4.514e+02 5.687e+02 6.910e+02 1.325e+03, threshold=1.137e+03, percent-clipped=4.0 2023-03-08 17:20:50,454 INFO [train.py:898] (3/4) Epoch 4, batch 450, loss[loss=0.2689, simple_loss=0.3421, pruned_loss=0.09786, over 17110.00 frames. ], tot_loss[loss=0.2568, simple_loss=0.3269, pruned_loss=0.09332, over 3217461.90 frames. ], batch size: 78, lr: 2.77e-02, grad_scale: 8.0 2023-03-08 17:21:46,163 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11400.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:21:49,048 INFO [train.py:898] (3/4) Epoch 4, batch 500, loss[loss=0.2768, simple_loss=0.3526, pruned_loss=0.1005, over 18232.00 frames. ], tot_loss[loss=0.2575, simple_loss=0.3272, pruned_loss=0.09392, over 3292426.56 frames. ], batch size: 60, lr: 2.76e-02, grad_scale: 4.0 2023-03-08 17:22:09,638 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.774e+02 5.147e+02 6.792e+02 8.631e+02 2.583e+03, threshold=1.358e+03, percent-clipped=9.0 2023-03-08 17:22:20,418 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11430.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:22:40,931 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=11448.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:22:46,349 INFO [train.py:898] (3/4) Epoch 4, batch 550, loss[loss=0.2688, simple_loss=0.3327, pruned_loss=0.1024, over 16208.00 frames. ], tot_loss[loss=0.2569, simple_loss=0.3267, pruned_loss=0.09355, over 3369407.17 frames. 
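
`grad_scale` is the dynamic loss scale of fp16 training: it sits at 8.0 for most of this stretch, halves to 4.0 when an overflowing step is skipped (batch 500 above), and is grown back to 8.0 once enough consecutive steps succeed (by batch 800 below). That is standard `torch.cuda.amp.GradScaler` behaviour; a minimal training-step sketch, with the scaler constructed at defaults (judging by the small scale and the quick recovery, the recipe evidently uses a small initial scale and a shorter growth interval than the defaults):

```python
import torch

scaler = torch.cuda.amp.GradScaler(enabled=True)

def train_step(model, optimizer, batch, loss_fn):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=True):
        loss = loss_fn(model, batch)
    scaler.scale(loss).backward()  # scale the loss so fp16 grads don't underflow
    scaler.step(optimizer)         # unscales grads; skips the step on inf/nan
    scaler.update()                # halves the scale on overflow, grows it when stable
    return loss.detach(), scaler.get_scale()  # get_scale() is the logged grad_scale
```
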
], batch size: 94, lr: 2.76e-02, grad_scale: 4.0 2023-03-08 17:23:05,267 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.03 vs. limit=2.0 2023-03-08 17:23:17,132 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=11478.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:23:18,616 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0 2023-03-08 17:23:38,872 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4296, 3.3115, 2.9997, 2.5821, 3.0330, 2.6752, 2.2681, 3.1766], device='cuda:3'), covar=tensor([0.0038, 0.0059, 0.0086, 0.0116, 0.0089, 0.0136, 0.0223, 0.0098], device='cuda:3'), in_proj_covar=tensor([0.0043, 0.0049, 0.0051, 0.0077, 0.0052, 0.0078, 0.0090, 0.0049], device='cuda:3'), out_proj_covar=tensor([6.4173e-05, 7.9138e-05, 8.4529e-05, 1.2193e-04, 8.1863e-05, 1.2337e-04, 1.4848e-04, 7.8596e-05], device='cuda:3') 2023-03-08 17:23:45,218 INFO [train.py:898] (3/4) Epoch 4, batch 600, loss[loss=0.2572, simple_loss=0.3295, pruned_loss=0.09242, over 18555.00 frames. ], tot_loss[loss=0.2576, simple_loss=0.3269, pruned_loss=0.0941, over 3401210.75 frames. ], batch size: 49, lr: 2.75e-02, grad_scale: 4.0 2023-03-08 17:24:07,206 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.433e+02 4.507e+02 5.343e+02 7.594e+02 1.826e+03, threshold=1.069e+03, percent-clipped=2.0 2023-03-08 17:24:17,745 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11530.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:24:20,116 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4929, 5.0567, 5.6339, 5.4205, 5.3082, 6.0307, 5.6251, 5.5295], device='cuda:3'), covar=tensor([0.0664, 0.0468, 0.0577, 0.0463, 0.1092, 0.0597, 0.0532, 0.1059], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0156, 0.0162, 0.0160, 0.0207, 0.0229, 0.0152, 0.0226], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 17:24:20,244 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11532.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:24:30,016 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11541.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 17:24:43,293 INFO [train.py:898] (3/4) Epoch 4, batch 650, loss[loss=0.2492, simple_loss=0.3255, pruned_loss=0.08644, over 18004.00 frames. ], tot_loss[loss=0.2579, simple_loss=0.3277, pruned_loss=0.09399, over 3436082.86 frames. ], batch size: 65, lr: 2.75e-02, grad_scale: 4.0 2023-03-08 17:25:25,673 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=11589.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 17:25:30,303 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11593.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:25:41,337 INFO [train.py:898] (3/4) Epoch 4, batch 700, loss[loss=0.2499, simple_loss=0.3228, pruned_loss=0.08846, over 18289.00 frames. ], tot_loss[loss=0.2577, simple_loss=0.3276, pruned_loss=0.09391, over 3460433.58 frames. ], batch size: 49, lr: 2.74e-02, grad_scale: 4.0 2023-03-08 17:26:01,046 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. 
limit=2.0 2023-03-08 17:26:03,621 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.297e+02 5.310e+02 6.723e+02 8.319e+02 1.846e+03, threshold=1.345e+03, percent-clipped=5.0 2023-03-08 17:26:19,750 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11635.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:26:39,985 INFO [train.py:898] (3/4) Epoch 4, batch 750, loss[loss=0.2882, simple_loss=0.3601, pruned_loss=0.1082, over 18295.00 frames. ], tot_loss[loss=0.2572, simple_loss=0.3272, pruned_loss=0.09359, over 3493308.46 frames. ], batch size: 57, lr: 2.74e-02, grad_scale: 4.0 2023-03-08 17:26:59,034 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3198, 5.2569, 2.7140, 5.1420, 4.8628, 5.2229, 5.0684, 2.6528], device='cuda:3'), covar=tensor([0.0133, 0.0048, 0.0817, 0.0060, 0.0079, 0.0069, 0.0107, 0.1201], device='cuda:3'), in_proj_covar=tensor([0.0059, 0.0042, 0.0077, 0.0051, 0.0052, 0.0044, 0.0057, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0004], device='cuda:3') 2023-03-08 17:27:01,337 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4643, 4.6958, 2.6945, 4.9710, 5.3567, 2.6636, 4.4378, 4.0952], device='cuda:3'), covar=tensor([0.0049, 0.0634, 0.1460, 0.0263, 0.0044, 0.1300, 0.0462, 0.0693], device='cuda:3'), in_proj_covar=tensor([0.0074, 0.0125, 0.0164, 0.0146, 0.0067, 0.0153, 0.0170, 0.0158], device='cuda:3'), out_proj_covar=tensor([9.8545e-05, 1.8173e-04, 1.9921e-04, 1.8335e-04, 8.9491e-05, 1.9207e-04, 2.0991e-04, 2.0573e-04], device='cuda:3') 2023-03-08 17:27:31,248 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11696.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:27:34,710 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11699.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:27:38,700 INFO [train.py:898] (3/4) Epoch 4, batch 800, loss[loss=0.2955, simple_loss=0.3618, pruned_loss=0.1145, over 17924.00 frames. ], tot_loss[loss=0.256, simple_loss=0.3262, pruned_loss=0.09291, over 3519320.05 frames. ], batch size: 65, lr: 2.73e-02, grad_scale: 8.0 2023-03-08 17:27:54,051 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5658, 5.5066, 3.0941, 5.3781, 5.2707, 5.5114, 5.3189, 3.0939], device='cuda:3'), covar=tensor([0.0130, 0.0064, 0.0758, 0.0052, 0.0077, 0.0077, 0.0123, 0.1121], device='cuda:3'), in_proj_covar=tensor([0.0060, 0.0044, 0.0079, 0.0052, 0.0054, 0.0046, 0.0059, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0004], device='cuda:3') 2023-03-08 17:28:00,217 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.738e+02 5.059e+02 5.928e+02 8.034e+02 1.891e+03, threshold=1.186e+03, percent-clipped=2.0 2023-03-08 17:28:37,645 INFO [train.py:898] (3/4) Epoch 4, batch 850, loss[loss=0.2188, simple_loss=0.2896, pruned_loss=0.07402, over 18575.00 frames. ], tot_loss[loss=0.2566, simple_loss=0.3262, pruned_loss=0.0935, over 3530502.61 frames. ], batch size: 45, lr: 2.73e-02, grad_scale: 8.0 2023-03-08 17:28:46,066 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11760.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:29:00,438 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-08 17:29:06,630 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. 
limit=2.0 2023-03-08 17:29:37,624 INFO [train.py:898] (3/4) Epoch 4, batch 900, loss[loss=0.1928, simple_loss=0.2733, pruned_loss=0.05616, over 18234.00 frames. ], tot_loss[loss=0.2558, simple_loss=0.3258, pruned_loss=0.09293, over 3545463.24 frames. ], batch size: 45, lr: 2.72e-02, grad_scale: 8.0 2023-03-08 17:29:52,682 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11816.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:29:58,018 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.973e+02 4.760e+02 5.740e+02 6.849e+02 1.554e+03, threshold=1.148e+03, percent-clipped=4.0 2023-03-08 17:30:11,196 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11830.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:30:29,293 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11846.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:30:36,938 INFO [train.py:898] (3/4) Epoch 4, batch 950, loss[loss=0.2473, simple_loss=0.3186, pruned_loss=0.088, over 18379.00 frames. ], tot_loss[loss=0.2555, simple_loss=0.3255, pruned_loss=0.09277, over 3552790.55 frames. ], batch size: 50, lr: 2.72e-02, grad_scale: 8.0 2023-03-08 17:31:05,093 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11877.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:31:06,063 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=11878.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:31:18,406 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11888.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:31:22,527 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-08 17:31:35,205 INFO [train.py:898] (3/4) Epoch 4, batch 1000, loss[loss=0.2611, simple_loss=0.3254, pruned_loss=0.0984, over 18502.00 frames. ], tot_loss[loss=0.2547, simple_loss=0.3252, pruned_loss=0.0921, over 3564345.48 frames. 
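
`tot_loss[...]` is not a plain epoch average. Its "over N frames" count starts at a single batch's worth at batch 0 of the epoch, climbs through roughly 0.8M frames by batch 50 and 2.3M by batch 200, and plateaus near 3.55-3.6M frames here, which is exactly what an exponentially decayed accumulator with decay 1 − 1/200 and ~18k frames per batch converges to (18k × 200 = 3.6M). The reported tot_loss is therefore a frame-weighted average dominated by roughly the last two hundred batches. A sketch of such an accumulator; the decay constant is an inference from the logged frame counts, not a quoted implementation:

```python
class DecayingLossTracker:
    """Frame-weighted, exponentially decayed loss average.

    decay = 1 - 1/200 reproduces the ~3.6M-frame plateau seen in the log
    (assumed; inferred from the logged 'over N frames' values).
    """

    def __init__(self, decay: float = 1.0 - 1.0 / 200):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> None:
        # Each batch's contribution fades geometrically with age.
        self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
        self.frames = self.frames * self.decay + batch_frames

    @property
    def tot_loss(self) -> float:
        return self.loss_sum / max(self.frames, 1.0)
```
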
], batch size: 47, lr: 2.71e-02, grad_scale: 8.0 2023-03-08 17:31:40,147 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11907.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:31:48,147 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5705, 5.5939, 4.8894, 5.5724, 5.4711, 5.0187, 5.4911, 5.0429], device='cuda:3'), covar=tensor([0.0302, 0.0239, 0.1616, 0.0459, 0.0325, 0.0336, 0.0278, 0.0603], device='cuda:3'), in_proj_covar=tensor([0.0260, 0.0276, 0.0437, 0.0228, 0.0212, 0.0258, 0.0274, 0.0332], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0003, 0.0003, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-08 17:31:52,848 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7384, 3.3419, 3.0416, 3.0633, 3.3542, 2.7676, 2.8017, 3.7198], device='cuda:3'), covar=tensor([0.0037, 0.0103, 0.0152, 0.0115, 0.0087, 0.0211, 0.0215, 0.0065], device='cuda:3'), in_proj_covar=tensor([0.0046, 0.0053, 0.0054, 0.0076, 0.0052, 0.0081, 0.0092, 0.0049], device='cuda:3'), out_proj_covar=tensor([6.8464e-05, 8.5586e-05, 8.9161e-05, 1.2044e-04, 8.1315e-05, 1.2888e-04, 1.5103e-04, 7.7963e-05], device='cuda:3') 2023-03-08 17:31:54,912 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11920.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:31:55,672 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.070e+02 4.938e+02 5.692e+02 6.896e+02 1.055e+03, threshold=1.138e+03, percent-clipped=0.0 2023-03-08 17:32:05,079 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.98 vs. limit=2.0 2023-03-08 17:32:28,321 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11948.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:32:31,990 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-08 17:32:33,400 INFO [train.py:898] (3/4) Epoch 4, batch 1050, loss[loss=0.2662, simple_loss=0.3362, pruned_loss=0.09805, over 18089.00 frames. ], tot_loss[loss=0.2549, simple_loss=0.3253, pruned_loss=0.09223, over 3564869.55 frames. ], batch size: 62, lr: 2.71e-02, grad_scale: 8.0 2023-03-08 17:33:05,223 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11981.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:33:07,708 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8735, 4.8210, 4.2490, 4.8114, 4.7733, 4.2664, 4.7480, 4.3244], device='cuda:3'), covar=tensor([0.0326, 0.0353, 0.1559, 0.0618, 0.0323, 0.0377, 0.0339, 0.0631], device='cuda:3'), in_proj_covar=tensor([0.0268, 0.0286, 0.0443, 0.0233, 0.0213, 0.0263, 0.0282, 0.0335], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0003, 0.0003, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-08 17:33:08,927 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11984.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:33:17,596 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11991.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:33:35,192 INFO [train.py:898] (3/4) Epoch 4, batch 1100, loss[loss=0.2206, simple_loss=0.2884, pruned_loss=0.07641, over 18512.00 frames. ], tot_loss[loss=0.2556, simple_loss=0.3259, pruned_loss=0.09269, over 3579956.41 frames. 
], batch size: 44, lr: 2.70e-02, grad_scale: 4.0 2023-03-08 17:33:42,497 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12009.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:33:56,849 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.224e+02 5.074e+02 6.054e+02 6.990e+02 2.904e+03, threshold=1.211e+03, percent-clipped=5.0 2023-03-08 17:34:25,535 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12045.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:34:34,308 INFO [train.py:898] (3/4) Epoch 4, batch 1150, loss[loss=0.2562, simple_loss=0.335, pruned_loss=0.08868, over 18333.00 frames. ], tot_loss[loss=0.2555, simple_loss=0.3257, pruned_loss=0.09264, over 3576866.54 frames. ], batch size: 55, lr: 2.70e-02, grad_scale: 4.0 2023-03-08 17:34:36,861 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12055.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:35:00,953 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=12076.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:35:27,746 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5942, 2.8060, 2.6357, 2.8453, 3.4080, 3.5571, 2.8223, 3.3763], device='cuda:3'), covar=tensor([0.0318, 0.0218, 0.0741, 0.0291, 0.0202, 0.0196, 0.0335, 0.0180], device='cuda:3'), in_proj_covar=tensor([0.0081, 0.0059, 0.0122, 0.0089, 0.0069, 0.0050, 0.0079, 0.0076], device='cuda:3'), out_proj_covar=tensor([1.5624e-04, 1.1605e-04, 2.1187e-04, 1.6074e-04, 1.3287e-04, 9.1139e-05, 1.4910e-04, 1.4088e-04], device='cuda:3') 2023-03-08 17:35:32,891 INFO [train.py:898] (3/4) Epoch 4, batch 1200, loss[loss=0.2211, simple_loss=0.2859, pruned_loss=0.07809, over 18474.00 frames. ], tot_loss[loss=0.2539, simple_loss=0.324, pruned_loss=0.09187, over 3580476.60 frames. ], batch size: 44, lr: 2.69e-02, grad_scale: 8.0 2023-03-08 17:35:54,812 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.657e+02 4.866e+02 5.977e+02 7.705e+02 1.703e+03, threshold=1.195e+03, percent-clipped=4.0 2023-03-08 17:35:55,302 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4280, 5.5628, 3.3511, 5.2808, 5.2855, 5.5857, 5.4195, 3.0325], device='cuda:3'), covar=tensor([0.0130, 0.0042, 0.0638, 0.0056, 0.0051, 0.0051, 0.0073, 0.1016], device='cuda:3'), in_proj_covar=tensor([0.0060, 0.0045, 0.0079, 0.0053, 0.0053, 0.0046, 0.0059, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0004], device='cuda:3') 2023-03-08 17:36:03,440 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9423, 2.9011, 1.7799, 3.2150, 2.5608, 3.3050, 1.9853, 2.8426], device='cuda:3'), covar=tensor([0.0354, 0.0558, 0.1067, 0.0354, 0.0561, 0.0149, 0.0933, 0.0309], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0177, 0.0156, 0.0127, 0.0156, 0.0089, 0.0159, 0.0141], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 17:36:12,289 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12137.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:36:31,683 INFO [train.py:898] (3/4) Epoch 4, batch 1250, loss[loss=0.2567, simple_loss=0.3354, pruned_loss=0.08904, over 18389.00 frames. ], tot_loss[loss=0.2539, simple_loss=0.3239, pruned_loss=0.09199, over 3574794.47 frames. 
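
The learning rate decays smoothly within the epoch: 3.02e-02 late in epoch 3, 2.81e-02 at epoch 4 batch 0, 2.70e-02 around batch 1100, and 2.53e-02 by batch 3100 further below. These values match icefall's Eden schedule, lr = base_lr · ((step² + lr_batches²)/lr_batches²)^(−1/4) · ((epoch² + lr_epochs²)/lr_epochs²)^(−1/4), with base_lr = 0.05, lr_batches = 5000, lr_epochs = 3.5, and epoch counted as completed epochs. A quick check against the logged values:

```python
def eden_lr(step: float, epoch: float,
            base_lr: float = 0.05,
            lr_batches: float = 5000.0,
            lr_epochs: float = 3.5) -> float:
    """Eden schedule: smooth power-law decay in both step and epoch."""
    batch_factor = ((step ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

print(f"{eden_lr(10946, 3):.2e}")  # 2.81e-02 -- epoch 4, batch 0 in this log
print(f"{eden_lr(12009, 3):.2e}")  # 2.70e-02 -- epoch 4, around batch 1100
```

Both factors are ≈1 early on and decay asymptotically as step^(−1/2) and epoch^(−1/2), which is why the per-50-batch decrease is already visible at this point in training.
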
], batch size: 52, lr: 2.69e-02, grad_scale: 8.0 2023-03-08 17:36:53,290 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12172.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:37:01,256 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8829, 3.0593, 4.2854, 4.1674, 2.3823, 4.2648, 4.0050, 2.4794], device='cuda:3'), covar=tensor([0.0241, 0.0813, 0.0072, 0.0108, 0.1502, 0.0126, 0.0192, 0.1124], device='cuda:3'), in_proj_covar=tensor([0.0113, 0.0145, 0.0081, 0.0090, 0.0165, 0.0099, 0.0103, 0.0155], device='cuda:3'), out_proj_covar=tensor([1.0996e-04, 1.4089e-04, 8.4807e-05, 8.7880e-05, 1.5865e-04, 9.3980e-05, 1.0960e-04, 1.5696e-04], device='cuda:3') 2023-03-08 17:37:11,274 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12188.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:37:29,463 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12202.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:37:30,435 INFO [train.py:898] (3/4) Epoch 4, batch 1300, loss[loss=0.2519, simple_loss=0.3271, pruned_loss=0.08836, over 17727.00 frames. ], tot_loss[loss=0.2542, simple_loss=0.3243, pruned_loss=0.09199, over 3583308.67 frames. ], batch size: 70, lr: 2.68e-02, grad_scale: 4.0 2023-03-08 17:37:52,910 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.709e+02 4.849e+02 5.902e+02 7.729e+02 1.516e+03, threshold=1.180e+03, percent-clipped=2.0 2023-03-08 17:37:55,649 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6071, 3.3657, 4.2434, 4.1839, 2.3089, 4.1830, 3.8261, 2.7151], device='cuda:3'), covar=tensor([0.0322, 0.0638, 0.0071, 0.0115, 0.1622, 0.0108, 0.0312, 0.0835], device='cuda:3'), in_proj_covar=tensor([0.0111, 0.0141, 0.0079, 0.0088, 0.0164, 0.0098, 0.0103, 0.0151], device='cuda:3'), out_proj_covar=tensor([1.0890e-04, 1.3672e-04, 8.3269e-05, 8.5733e-05, 1.5862e-04, 9.3505e-05, 1.0933e-04, 1.5267e-04], device='cuda:3') 2023-03-08 17:38:07,782 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=12236.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:38:29,665 INFO [train.py:898] (3/4) Epoch 4, batch 1350, loss[loss=0.2156, simple_loss=0.2935, pruned_loss=0.06886, over 18253.00 frames. ], tot_loss[loss=0.2533, simple_loss=0.3234, pruned_loss=0.09165, over 3586261.99 frames. ], batch size: 47, lr: 2.68e-02, grad_scale: 4.0 2023-03-08 17:38:56,226 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12276.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:39:13,231 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12291.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:39:16,680 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5304, 2.6304, 2.6515, 2.9229, 3.2892, 3.5231, 2.9764, 3.3039], device='cuda:3'), covar=tensor([0.0386, 0.0388, 0.0727, 0.0307, 0.0274, 0.0166, 0.0442, 0.0225], device='cuda:3'), in_proj_covar=tensor([0.0086, 0.0062, 0.0124, 0.0089, 0.0074, 0.0051, 0.0084, 0.0079], device='cuda:3'), out_proj_covar=tensor([1.6574e-04, 1.2201e-04, 2.1753e-04, 1.6244e-04, 1.4216e-04, 9.0791e-05, 1.5799e-04, 1.4897e-04], device='cuda:3') 2023-03-08 17:39:27,798 INFO [train.py:898] (3/4) Epoch 4, batch 1400, loss[loss=0.2813, simple_loss=0.3535, pruned_loss=0.1046, over 18142.00 frames. ], tot_loss[loss=0.2531, simple_loss=0.3228, pruned_loss=0.09171, over 3580584.59 frames. 
], batch size: 62, lr: 2.67e-02, grad_scale: 4.0 2023-03-08 17:39:29,719 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12304.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:39:38,028 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1904, 5.2925, 2.2136, 4.9553, 4.9961, 5.2689, 5.0771, 2.5534], device='cuda:3'), covar=tensor([0.0145, 0.0046, 0.0927, 0.0068, 0.0056, 0.0071, 0.0079, 0.1170], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0046, 0.0078, 0.0053, 0.0053, 0.0047, 0.0057, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0004], device='cuda:3') 2023-03-08 17:39:51,204 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.094e+02 5.084e+02 6.027e+02 7.946e+02 1.309e+03, threshold=1.205e+03, percent-clipped=1.0 2023-03-08 17:40:09,878 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=12339.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:40:11,031 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12340.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:40:26,788 INFO [train.py:898] (3/4) Epoch 4, batch 1450, loss[loss=0.2652, simple_loss=0.3375, pruned_loss=0.09645, over 16121.00 frames. ], tot_loss[loss=0.2534, simple_loss=0.3231, pruned_loss=0.09181, over 3571811.00 frames. ], batch size: 94, lr: 2.67e-02, grad_scale: 4.0 2023-03-08 17:40:29,936 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12355.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:40:32,119 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6977, 5.2422, 4.9436, 4.9778, 4.7998, 4.9479, 5.3251, 5.2397], device='cuda:3'), covar=tensor([0.1039, 0.0482, 0.0628, 0.0611, 0.1220, 0.0482, 0.0416, 0.0503], device='cuda:3'), in_proj_covar=tensor([0.0344, 0.0289, 0.0222, 0.0303, 0.0429, 0.0305, 0.0327, 0.0286], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-08 17:40:41,071 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4236, 3.2494, 1.4045, 4.0929, 2.7934, 4.3997, 1.9731, 3.7693], device='cuda:3'), covar=tensor([0.0530, 0.1019, 0.1932, 0.0515, 0.1101, 0.0159, 0.1387, 0.0366], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0179, 0.0159, 0.0132, 0.0157, 0.0090, 0.0158, 0.0143], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 17:41:22,779 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.89 vs. limit=5.0 2023-03-08 17:41:24,333 INFO [train.py:898] (3/4) Epoch 4, batch 1500, loss[loss=0.2542, simple_loss=0.3267, pruned_loss=0.09083, over 18408.00 frames. ], tot_loss[loss=0.2538, simple_loss=0.3237, pruned_loss=0.09196, over 3583783.90 frames. 
], batch size: 48, lr: 2.66e-02, grad_scale: 4.0 2023-03-08 17:41:24,589 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=12403.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:41:24,917 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1032, 1.6539, 4.1300, 2.7427, 3.2826, 4.8235, 4.0576, 4.1141], device='cuda:3'), covar=tensor([0.0272, 0.0809, 0.0210, 0.0494, 0.0931, 0.0030, 0.0191, 0.0135], device='cuda:3'), in_proj_covar=tensor([0.0120, 0.0158, 0.0091, 0.0154, 0.0239, 0.0086, 0.0129, 0.0118], device='cuda:3'), out_proj_covar=tensor([9.7603e-05, 1.2757e-04, 7.9781e-05, 1.1511e-04, 1.9045e-04, 6.3417e-05, 1.0537e-04, 9.3540e-05], device='cuda:3') 2023-03-08 17:41:48,830 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.804e+02 4.633e+02 5.942e+02 7.166e+02 1.765e+03, threshold=1.188e+03, percent-clipped=4.0 2023-03-08 17:41:59,145 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12432.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:42:15,151 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=12446.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:42:22,762 INFO [train.py:898] (3/4) Epoch 4, batch 1550, loss[loss=0.2414, simple_loss=0.3093, pruned_loss=0.08679, over 18291.00 frames. ], tot_loss[loss=0.2529, simple_loss=0.3229, pruned_loss=0.09144, over 3579206.03 frames. ], batch size: 49, lr: 2.66e-02, grad_scale: 4.0 2023-03-08 17:42:45,730 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12472.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:43:19,505 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12502.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:43:20,381 INFO [train.py:898] (3/4) Epoch 4, batch 1600, loss[loss=0.2283, simple_loss=0.3081, pruned_loss=0.07426, over 18543.00 frames. ], tot_loss[loss=0.2545, simple_loss=0.3239, pruned_loss=0.09255, over 3565944.04 frames. ], batch size: 49, lr: 2.65e-02, grad_scale: 8.0 2023-03-08 17:43:25,832 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12507.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:43:41,778 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=12520.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:43:44,949 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.642e+02 4.900e+02 6.087e+02 7.463e+02 1.966e+03, threshold=1.217e+03, percent-clipped=9.0 2023-03-08 17:43:49,069 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-08 17:44:13,909 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4626, 2.7565, 2.7461, 2.7431, 3.3087, 3.4776, 2.7383, 3.2936], device='cuda:3'), covar=tensor([0.0358, 0.0406, 0.0709, 0.0333, 0.0302, 0.0282, 0.0537, 0.0226], device='cuda:3'), in_proj_covar=tensor([0.0084, 0.0063, 0.0118, 0.0088, 0.0072, 0.0052, 0.0083, 0.0076], device='cuda:3'), out_proj_covar=tensor([1.6302e-04, 1.2321e-04, 2.0808e-04, 1.6104e-04, 1.3831e-04, 9.3071e-05, 1.5710e-04, 1.4394e-04], device='cuda:3') 2023-03-08 17:44:15,884 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=12550.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:44:19,003 INFO [train.py:898] (3/4) Epoch 4, batch 1650, loss[loss=0.269, simple_loss=0.3353, pruned_loss=0.1013, over 15735.00 frames. 
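
The `zipformer.py:625` records expose per-stack stochastic layer skipping: each encoder stack has its own `warmup_begin`/`warmup_end` window in units of `batch_count`, and for a given batch a random subset of that stack's layers may be bypassed (`num_to_drop`, `layers_to_drop`). Well past warmup, as in the records above around batch_count ≈ 12400-12550, `num_to_drop` is almost always 0, with occasional single-layer drops (e.g. batch_count=12824 below), suggesting a small residual drop probability after the ramp. A schematic of that gating; the probabilities and the linear ramp are illustrative guesses, not icefall's exact schedule:

```python
import random

def pick_layers_to_drop(
    batch_count: float,
    warmup_begin: float,
    warmup_end: float,
    num_layers: int,
    max_drop: int = 2,
    rng=random,
) -> set:
    """Choose encoder layers to skip for this batch (illustrative schedule)."""
    if batch_count < warmup_begin:
        drop_prob = 0.5                   # aggressive skipping before the window
    elif batch_count < warmup_end:        # linear ramp inside the warmup window
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        drop_prob = 0.5 * (1.0 - frac) + 0.025 * frac
    else:
        drop_prob = 0.025                 # small residual skipping afterwards
    num_to_drop = sum(rng.random() < drop_prob for _ in range(max_drop))
    return set(rng.sample(range(num_layers), num_to_drop))
```
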
], tot_loss[loss=0.2524, simple_loss=0.3222, pruned_loss=0.09131, over 3575685.44 frames. ], batch size: 94, lr: 2.65e-02, grad_scale: 8.0 2023-03-08 17:44:48,154 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12576.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:45:18,522 INFO [train.py:898] (3/4) Epoch 4, batch 1700, loss[loss=0.2838, simple_loss=0.3535, pruned_loss=0.1071, over 18251.00 frames. ], tot_loss[loss=0.2516, simple_loss=0.3219, pruned_loss=0.09062, over 3581366.80 frames. ], batch size: 60, lr: 2.65e-02, grad_scale: 8.0 2023-03-08 17:45:20,064 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12604.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:45:43,561 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.156e+02 4.705e+02 5.777e+02 7.062e+02 1.643e+03, threshold=1.155e+03, percent-clipped=4.0 2023-03-08 17:45:44,933 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=12624.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:45:48,788 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-08 17:45:50,804 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4918, 4.3373, 4.6501, 3.5578, 3.6539, 3.1046, 2.5235, 2.0345], device='cuda:3'), covar=tensor([0.0212, 0.0224, 0.0057, 0.0186, 0.0343, 0.0175, 0.0614, 0.0847], device='cuda:3'), in_proj_covar=tensor([0.0030, 0.0032, 0.0026, 0.0036, 0.0052, 0.0028, 0.0053, 0.0055], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-08 17:46:02,702 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12640.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:46:11,646 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7745, 5.2601, 5.3213, 5.0427, 4.9420, 5.0891, 4.5843, 5.0414], device='cuda:3'), covar=tensor([0.0178, 0.0274, 0.0155, 0.0233, 0.0337, 0.0260, 0.0976, 0.0302], device='cuda:3'), in_proj_covar=tensor([0.0113, 0.0155, 0.0135, 0.0128, 0.0146, 0.0154, 0.0217, 0.0137], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0005, 0.0004], device='cuda:3') 2023-03-08 17:46:15,973 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=12652.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:46:16,913 INFO [train.py:898] (3/4) Epoch 4, batch 1750, loss[loss=0.2182, simple_loss=0.2935, pruned_loss=0.07146, over 18410.00 frames. ], tot_loss[loss=0.2513, simple_loss=0.322, pruned_loss=0.0903, over 3584682.03 frames. ], batch size: 48, lr: 2.64e-02, grad_scale: 8.0 2023-03-08 17:46:51,285 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-08 17:46:58,850 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=12688.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:47:15,808 INFO [train.py:898] (3/4) Epoch 4, batch 1800, loss[loss=0.2517, simple_loss=0.3305, pruned_loss=0.08642, over 18390.00 frames. ], tot_loss[loss=0.2516, simple_loss=0.3225, pruned_loss=0.09036, over 3579548.48 frames. 
], batch size: 52, lr: 2.64e-02, grad_scale: 8.0 2023-03-08 17:47:39,918 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.294e+02 5.100e+02 5.998e+02 7.451e+02 2.129e+03, threshold=1.200e+03, percent-clipped=3.0 2023-03-08 17:47:51,110 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12732.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:47:58,283 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3827, 3.5197, 1.6357, 4.3444, 2.8312, 4.6588, 2.0738, 4.0634], device='cuda:3'), covar=tensor([0.0508, 0.0751, 0.1654, 0.0319, 0.0939, 0.0085, 0.1207, 0.0261], device='cuda:3'), in_proj_covar=tensor([0.0139, 0.0178, 0.0158, 0.0134, 0.0153, 0.0093, 0.0160, 0.0142], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 17:48:15,104 INFO [train.py:898] (3/4) Epoch 4, batch 1850, loss[loss=0.3436, simple_loss=0.377, pruned_loss=0.1551, over 12546.00 frames. ], tot_loss[loss=0.2522, simple_loss=0.323, pruned_loss=0.09069, over 3577417.50 frames. ], batch size: 129, lr: 2.63e-02, grad_scale: 8.0 2023-03-08 17:48:26,885 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=12763.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:48:47,862 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=12780.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:49:10,742 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3543, 3.0404, 2.8628, 2.8100, 2.8142, 2.3576, 2.4149, 3.2952], device='cuda:3'), covar=tensor([0.0048, 0.0071, 0.0111, 0.0115, 0.0112, 0.0179, 0.0215, 0.0065], device='cuda:3'), in_proj_covar=tensor([0.0046, 0.0055, 0.0055, 0.0082, 0.0053, 0.0086, 0.0097, 0.0050], device='cuda:3'), out_proj_covar=tensor([6.9640e-05, 8.7872e-05, 9.0892e-05, 1.3215e-04, 8.1725e-05, 1.3690e-04, 1.5875e-04, 8.0624e-05], device='cuda:3') 2023-03-08 17:49:12,644 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12802.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:49:13,576 INFO [train.py:898] (3/4) Epoch 4, batch 1900, loss[loss=0.2692, simple_loss=0.3398, pruned_loss=0.09928, over 17184.00 frames. ], tot_loss[loss=0.2515, simple_loss=0.3222, pruned_loss=0.09045, over 3581762.04 frames. ], batch size: 78, lr: 2.63e-02, grad_scale: 8.0 2023-03-08 17:49:14,383 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.96 vs. limit=2.0 2023-03-08 17:49:36,777 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.387e+02 4.784e+02 5.928e+02 6.975e+02 1.301e+03, threshold=1.186e+03, percent-clipped=1.0 2023-03-08 17:49:38,313 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12824.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 17:49:57,972 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.58 vs. limit=2.0 2023-03-08 17:50:11,762 INFO [train.py:898] (3/4) Epoch 4, batch 1950, loss[loss=0.2558, simple_loss=0.3216, pruned_loss=0.095, over 18261.00 frames. ], tot_loss[loss=0.2508, simple_loss=0.3216, pruned_loss=0.08998, over 3592205.02 frames. 
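
Batch sizes in these records swing between ~42 and ~130 cuts (batch 1850 above packed 129 cuts, while most batches hold 42-65) because batches are assembled by total duration rather than by count: the `DynamicBucketingSampler` groups cuts of similar length into buckets and fills each batch up to the 750 s `max_duration` budget, so short-utterance buckets carry many more cuts. The frame counts agree: ~18400 logged frames at 4× subsampling and a 10 ms frame shift is ≈736 s of audio. A hedged Lhotse usage sketch (the cuts path is a placeholder):

```python
from lhotse import CutSet
from lhotse.dataset import DynamicBucketingSampler

cuts = CutSet.from_file("data/fbank/cuts_train.jsonl.gz")  # placeholder path
sampler = DynamicBucketingSampler(
    cuts,
    max_duration=750.0,  # total seconds of audio per batch
    num_buckets=30,      # cuts of similar duration share a bucket
    shuffle=True,
    drop_last=True,
)
for batch_cuts in sampler:   # each batch is a CutSet of similar-length cuts
    n = len(batch_cuts)      # ~130 for short-cut buckets, ~42 for long ones
    ...
```

Bucketing by duration keeps padding waste low, which is why the per-batch frame totals stay close to the duration budget even as the cut count varies threefold.
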
], batch size: 47, lr: 2.62e-02, grad_scale: 8.0 2023-03-08 17:50:30,024 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3909, 5.3782, 4.7946, 5.3195, 5.4105, 4.7269, 5.3609, 4.8242], device='cuda:3'), covar=tensor([0.0325, 0.0306, 0.1326, 0.0536, 0.0276, 0.0382, 0.0255, 0.0596], device='cuda:3'), in_proj_covar=tensor([0.0267, 0.0289, 0.0438, 0.0235, 0.0217, 0.0266, 0.0289, 0.0346], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0003, 0.0003, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-08 17:51:08,912 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7125, 3.3261, 3.2527, 3.0431, 3.4431, 2.7772, 2.7652, 3.5882], device='cuda:3'), covar=tensor([0.0033, 0.0080, 0.0089, 0.0116, 0.0086, 0.0163, 0.0161, 0.0074], device='cuda:3'), in_proj_covar=tensor([0.0047, 0.0057, 0.0056, 0.0085, 0.0054, 0.0088, 0.0100, 0.0052], device='cuda:3'), out_proj_covar=tensor([7.1559e-05, 9.1454e-05, 9.2765e-05, 1.3794e-04, 8.5191e-05, 1.4029e-04, 1.6282e-04, 8.4066e-05], device='cuda:3') 2023-03-08 17:51:10,760 INFO [train.py:898] (3/4) Epoch 4, batch 2000, loss[loss=0.2155, simple_loss=0.2853, pruned_loss=0.07285, over 18413.00 frames. ], tot_loss[loss=0.2514, simple_loss=0.3222, pruned_loss=0.09024, over 3581769.08 frames. ], batch size: 42, lr: 2.62e-02, grad_scale: 8.0 2023-03-08 17:51:32,683 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5791, 5.2575, 5.2760, 5.0638, 4.9501, 5.1687, 4.4961, 5.1081], device='cuda:3'), covar=tensor([0.0221, 0.0246, 0.0175, 0.0192, 0.0390, 0.0219, 0.1291, 0.0265], device='cuda:3'), in_proj_covar=tensor([0.0116, 0.0155, 0.0136, 0.0127, 0.0147, 0.0154, 0.0222, 0.0137], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0005, 0.0004], device='cuda:3') 2023-03-08 17:51:33,458 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.610e+02 4.786e+02 5.613e+02 6.495e+02 1.703e+03, threshold=1.123e+03, percent-clipped=3.0 2023-03-08 17:52:08,666 INFO [train.py:898] (3/4) Epoch 4, batch 2050, loss[loss=0.2453, simple_loss=0.3228, pruned_loss=0.08395, over 18301.00 frames. ], tot_loss[loss=0.2531, simple_loss=0.3239, pruned_loss=0.09113, over 3576917.00 frames. ], batch size: 54, lr: 2.61e-02, grad_scale: 8.0 2023-03-08 17:52:27,971 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-08 17:52:51,776 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.1093, 3.9093, 5.2033, 4.2074, 3.3717, 2.8996, 4.6611, 5.1676], device='cuda:3'), covar=tensor([0.0966, 0.0825, 0.0052, 0.0237, 0.0702, 0.0923, 0.0193, 0.0034], device='cuda:3'), in_proj_covar=tensor([0.0121, 0.0128, 0.0060, 0.0118, 0.0145, 0.0148, 0.0118, 0.0058], device='cuda:3'), out_proj_covar=tensor([1.9812e-04, 2.1692e-04, 1.0517e-04, 1.9365e-04, 2.2417e-04, 2.3311e-04, 1.9088e-04, 9.4123e-05], device='cuda:3') 2023-03-08 17:53:07,611 INFO [train.py:898] (3/4) Epoch 4, batch 2100, loss[loss=0.3017, simple_loss=0.3609, pruned_loss=0.1213, over 18197.00 frames. ], tot_loss[loss=0.2542, simple_loss=0.3246, pruned_loss=0.09183, over 3564600.14 frames. 
], batch size: 60, lr: 2.61e-02, grad_scale: 8.0 2023-03-08 17:53:08,049 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3687, 3.2612, 1.8593, 4.0281, 2.6387, 4.2244, 2.0769, 3.6099], device='cuda:3'), covar=tensor([0.0490, 0.0900, 0.1472, 0.0370, 0.0951, 0.0129, 0.1201, 0.0321], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0179, 0.0157, 0.0137, 0.0154, 0.0094, 0.0159, 0.0142], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 17:53:29,877 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.835e+02 4.911e+02 6.247e+02 7.223e+02 1.206e+03, threshold=1.249e+03, percent-clipped=2.0 2023-03-08 17:54:06,019 INFO [train.py:898] (3/4) Epoch 4, batch 2150, loss[loss=0.2386, simple_loss=0.3094, pruned_loss=0.08394, over 18538.00 frames. ], tot_loss[loss=0.254, simple_loss=0.3243, pruned_loss=0.09189, over 3556991.09 frames. ], batch size: 49, lr: 2.61e-02, grad_scale: 8.0 2023-03-08 17:55:03,426 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=13102.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:55:04,392 INFO [train.py:898] (3/4) Epoch 4, batch 2200, loss[loss=0.2395, simple_loss=0.3195, pruned_loss=0.07979, over 18243.00 frames. ], tot_loss[loss=0.2534, simple_loss=0.3236, pruned_loss=0.09156, over 3564980.98 frames. ], batch size: 60, lr: 2.60e-02, grad_scale: 8.0 2023-03-08 17:55:22,898 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=13119.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 17:55:27,147 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.452e+02 4.789e+02 5.834e+02 7.042e+02 2.257e+03, threshold=1.167e+03, percent-clipped=4.0 2023-03-08 17:55:44,922 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8879, 5.9523, 5.2284, 5.7278, 5.0612, 5.7359, 6.0197, 5.8167], device='cuda:3'), covar=tensor([0.1607, 0.0573, 0.0566, 0.0809, 0.2052, 0.0700, 0.0619, 0.0656], device='cuda:3'), in_proj_covar=tensor([0.0361, 0.0302, 0.0228, 0.0324, 0.0456, 0.0320, 0.0350, 0.0295], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-08 17:55:47,987 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.64 vs. limit=5.0 2023-03-08 17:55:59,504 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=13150.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 17:56:02,803 INFO [train.py:898] (3/4) Epoch 4, batch 2250, loss[loss=0.2515, simple_loss=0.3208, pruned_loss=0.09113, over 18500.00 frames. ], tot_loss[loss=0.2525, simple_loss=0.3228, pruned_loss=0.09107, over 3564419.76 frames. ], batch size: 51, lr: 2.60e-02, grad_scale: 8.0 2023-03-08 17:56:26,649 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-08 17:57:01,372 INFO [train.py:898] (3/4) Epoch 4, batch 2300, loss[loss=0.2851, simple_loss=0.3481, pruned_loss=0.1111, over 18054.00 frames. ], tot_loss[loss=0.2528, simple_loss=0.3234, pruned_loss=0.09112, over 3565468.43 frames. 
], batch size: 65, lr: 2.59e-02, grad_scale: 8.0 2023-03-08 17:57:12,522 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9261, 2.9482, 4.1193, 4.0833, 2.4009, 4.7257, 4.2182, 3.2150], device='cuda:3'), covar=tensor([0.0196, 0.0786, 0.0161, 0.0138, 0.1265, 0.0077, 0.0215, 0.0780], device='cuda:3'), in_proj_covar=tensor([0.0124, 0.0153, 0.0088, 0.0099, 0.0170, 0.0108, 0.0118, 0.0161], device='cuda:3'), out_proj_covar=tensor([1.2090e-04, 1.4958e-04, 9.2083e-05, 9.5633e-05, 1.6383e-04, 1.0163e-04, 1.2519e-04, 1.6225e-04], device='cuda:3') 2023-03-08 17:57:23,442 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-08 17:57:24,771 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.647e+02 4.949e+02 6.048e+02 7.391e+02 1.554e+03, threshold=1.210e+03, percent-clipped=6.0 2023-03-08 17:57:36,440 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3410, 4.9493, 5.3905, 5.3799, 5.0633, 5.9346, 5.5842, 5.3684], device='cuda:3'), covar=tensor([0.0607, 0.0511, 0.0529, 0.0420, 0.1073, 0.0587, 0.0473, 0.1129], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0165, 0.0173, 0.0166, 0.0219, 0.0253, 0.0163, 0.0235], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 17:57:39,137 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-08 17:58:00,617 INFO [train.py:898] (3/4) Epoch 4, batch 2350, loss[loss=0.21, simple_loss=0.278, pruned_loss=0.07103, over 18405.00 frames. ], tot_loss[loss=0.2511, simple_loss=0.322, pruned_loss=0.09013, over 3577987.99 frames. ], batch size: 42, lr: 2.59e-02, grad_scale: 8.0 2023-03-08 17:58:15,352 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3058, 5.8459, 5.3380, 5.6625, 5.1903, 5.4853, 5.9523, 5.8147], device='cuda:3'), covar=tensor([0.1087, 0.0597, 0.0423, 0.0603, 0.1710, 0.0581, 0.0484, 0.0562], device='cuda:3'), in_proj_covar=tensor([0.0368, 0.0308, 0.0232, 0.0332, 0.0463, 0.0327, 0.0352, 0.0299], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-08 17:58:58,984 INFO [train.py:898] (3/4) Epoch 4, batch 2400, loss[loss=0.2558, simple_loss=0.3155, pruned_loss=0.09805, over 18498.00 frames. ], tot_loss[loss=0.2503, simple_loss=0.3209, pruned_loss=0.08984, over 3577610.97 frames. ], batch size: 47, lr: 2.58e-02, grad_scale: 8.0 2023-03-08 17:59:20,232 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.08 vs. limit=5.0 2023-03-08 17:59:23,023 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.316e+02 4.704e+02 5.884e+02 7.120e+02 1.441e+03, threshold=1.177e+03, percent-clipped=2.0 2023-03-08 17:59:58,169 INFO [train.py:898] (3/4) Epoch 4, batch 2450, loss[loss=0.2481, simple_loss=0.3227, pruned_loss=0.08681, over 17956.00 frames. ], tot_loss[loss=0.2493, simple_loss=0.3199, pruned_loss=0.08932, over 3581349.15 frames. ], batch size: 65, lr: 2.58e-02, grad_scale: 8.0 2023-03-08 18:00:56,434 INFO [train.py:898] (3/4) Epoch 4, batch 2500, loss[loss=0.2164, simple_loss=0.2859, pruned_loss=0.07343, over 18390.00 frames. ], tot_loss[loss=0.2494, simple_loss=0.3202, pruned_loss=0.08929, over 3593241.41 frames. 
], batch size: 42, lr: 2.58e-02, grad_scale: 8.0 2023-03-08 18:01:16,521 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=13419.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 18:01:20,752 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.373e+02 5.229e+02 6.294e+02 7.709e+02 1.447e+03, threshold=1.259e+03, percent-clipped=4.0 2023-03-08 18:01:55,117 INFO [train.py:898] (3/4) Epoch 4, batch 2550, loss[loss=0.266, simple_loss=0.3416, pruned_loss=0.09524, over 18646.00 frames. ], tot_loss[loss=0.2511, simple_loss=0.3215, pruned_loss=0.09036, over 3570575.05 frames. ], batch size: 52, lr: 2.57e-02, grad_scale: 8.0 2023-03-08 18:02:12,159 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=13467.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 18:02:35,381 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3483, 2.4148, 2.2521, 2.7598, 3.1607, 3.1791, 2.5785, 2.8327], device='cuda:3'), covar=tensor([0.0226, 0.0384, 0.0853, 0.0357, 0.0301, 0.0184, 0.0512, 0.0280], device='cuda:3'), in_proj_covar=tensor([0.0083, 0.0065, 0.0121, 0.0092, 0.0067, 0.0051, 0.0082, 0.0078], device='cuda:3'), out_proj_covar=tensor([1.6110e-04, 1.2907e-04, 2.1968e-04, 1.7018e-04, 1.3369e-04, 9.2471e-05, 1.5856e-04, 1.4914e-04], device='cuda:3') 2023-03-08 18:02:48,773 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3613, 5.3331, 4.7627, 5.2673, 5.3638, 4.6168, 5.1992, 4.8601], device='cuda:3'), covar=tensor([0.0323, 0.0304, 0.1404, 0.0543, 0.0278, 0.0430, 0.0307, 0.0606], device='cuda:3'), in_proj_covar=tensor([0.0274, 0.0298, 0.0451, 0.0240, 0.0225, 0.0275, 0.0298, 0.0359], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-08 18:02:53,611 INFO [train.py:898] (3/4) Epoch 4, batch 2600, loss[loss=0.2163, simple_loss=0.2869, pruned_loss=0.07283, over 18487.00 frames. ], tot_loss[loss=0.2504, simple_loss=0.3207, pruned_loss=0.09007, over 3582007.87 frames. ], batch size: 44, lr: 2.57e-02, grad_scale: 8.0 2023-03-08 18:03:17,313 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.093e+02 4.722e+02 5.886e+02 7.226e+02 1.343e+03, threshold=1.177e+03, percent-clipped=1.0 2023-03-08 18:03:51,832 INFO [train.py:898] (3/4) Epoch 4, batch 2650, loss[loss=0.2348, simple_loss=0.3057, pruned_loss=0.08197, over 18159.00 frames. ], tot_loss[loss=0.2501, simple_loss=0.3209, pruned_loss=0.08967, over 3595399.10 frames. ], batch size: 44, lr: 2.56e-02, grad_scale: 8.0 2023-03-08 18:04:50,560 INFO [train.py:898] (3/4) Epoch 4, batch 2700, loss[loss=0.2023, simple_loss=0.2759, pruned_loss=0.0643, over 18430.00 frames. ], tot_loss[loss=0.2496, simple_loss=0.3208, pruned_loss=0.08916, over 3589795.69 frames. ], batch size: 43, lr: 2.56e-02, grad_scale: 8.0 2023-03-08 18:05:14,649 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.306e+02 5.163e+02 6.316e+02 7.971e+02 1.590e+03, threshold=1.263e+03, percent-clipped=4.0 2023-03-08 18:05:48,660 INFO [train.py:898] (3/4) Epoch 4, batch 2750, loss[loss=0.231, simple_loss=0.3006, pruned_loss=0.08076, over 18244.00 frames. ], tot_loss[loss=0.2492, simple_loss=0.3205, pruned_loss=0.08897, over 3578303.87 frames. ], batch size: 45, lr: 2.55e-02, grad_scale: 8.0 2023-03-08 18:06:32,542 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.05 vs. 
2023-03-08 18:06:47,205 INFO [train.py:898] (3/4) Epoch 4, batch 2800, loss[loss=0.2189, simple_loss=0.2776, pruned_loss=0.08011, over 17620.00 frames. ], tot_loss[loss=0.2486, simple_loss=0.3199, pruned_loss=0.08864, over 3586225.24 frames. ], batch size: 39, lr: 2.55e-02, grad_scale: 8.0
2023-03-08 18:07:11,543 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.506e+02 4.982e+02 6.101e+02 7.616e+02 1.473e+03, threshold=1.220e+03, percent-clipped=5.0
2023-03-08 18:07:13,099 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4881, 2.6481, 3.7537, 3.6550, 2.5286, 4.2062, 3.6738, 2.5272], device='cuda:3'), covar=tensor([0.0280, 0.0839, 0.0134, 0.0163, 0.1231, 0.0084, 0.0290, 0.0818], device='cuda:3'), in_proj_covar=tensor([0.0132, 0.0161, 0.0090, 0.0101, 0.0176, 0.0116, 0.0128, 0.0166], device='cuda:3'), out_proj_covar=tensor([1.2742e-04, 1.5728e-04, 9.2958e-05, 9.8584e-05, 1.6915e-04, 1.0932e-04, 1.3427e-04, 1.6696e-04], device='cuda:3')
2023-03-08 18:07:28,178 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=13737.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:07:45,678 INFO [train.py:898] (3/4) Epoch 4, batch 2850, loss[loss=0.2111, simple_loss=0.2869, pruned_loss=0.06763, over 18370.00 frames. ], tot_loss[loss=0.2496, simple_loss=0.321, pruned_loss=0.08905, over 3590093.53 frames. ], batch size: 42, lr: 2.55e-02, grad_scale: 8.0
2023-03-08 18:08:25,587 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9604, 4.8909, 4.9828, 4.7604, 4.7762, 4.8243, 5.1465, 5.1657], device='cuda:3'), covar=tensor([0.0059, 0.0057, 0.0058, 0.0074, 0.0077, 0.0095, 0.0067, 0.0086], device='cuda:3'), in_proj_covar=tensor([0.0063, 0.0046, 0.0046, 0.0061, 0.0052, 0.0068, 0.0056, 0.0055], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002], device='cuda:3')
2023-03-08 18:08:29,839 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0
2023-03-08 18:08:38,575 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=13798.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 18:08:43,998 INFO [train.py:898] (3/4) Epoch 4, batch 2900, loss[loss=0.2201, simple_loss=0.2818, pruned_loss=0.0792, over 18390.00 frames. ], tot_loss[loss=0.2497, simple_loss=0.3209, pruned_loss=0.08924, over 3588222.97 frames. ], batch size: 42, lr: 2.54e-02, grad_scale: 8.0
2023-03-08 18:09:07,288 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.361e+02 5.191e+02 6.530e+02 8.692e+02 2.187e+03, threshold=1.306e+03, percent-clipped=7.0
2023-03-08 18:09:18,144 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.98 vs. limit=2.0
2023-03-08 18:09:33,817 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.82 vs. limit=2.0
2023-03-08 18:09:43,213 INFO [train.py:898] (3/4) Epoch 4, batch 2950, loss[loss=0.2301, simple_loss=0.2919, pruned_loss=0.08411, over 18138.00 frames. ], tot_loss[loss=0.249, simple_loss=0.3204, pruned_loss=0.08881, over 3579600.50 frames. ], batch size: 44, lr: 2.54e-02, grad_scale: 8.0
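A note on the scaling.py:679 records: each Whiten module logs a whiteness metric for its activations against a limit; the metric is 1.0 when the grouped channel covariance is a multiple of the identity and grows as the activations become less white, and a penalty is applied only when it exceeds the limit (hence the frequent "metric=... vs. limit=..." lines hovering just below 2.0). A sketch of one way such a metric can be computed; the exact formula used here, tr(C^2)/D divided by (tr(C)/D)^2 per group, is an assumption:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # Hypothetical sketch: >= 1.0, equal to 1.0 exactly when each
        # group's channel covariance is a multiple of the identity.
        x = x.reshape(-1, x.shape[-1])
        num_frames, num_channels = x.shape
        cpg = num_channels // num_groups  # channels per group
        x = x.reshape(num_frames, num_groups, cpg).transpose(0, 1)
        x = x - x.mean(dim=1, keepdim=True)
        covar = torch.matmul(x.transpose(1, 2), x)  # (num_groups, cpg, cpg)
        mean_diag = covar.diagonal(dim1=1, dim2=2).mean()
        mean_sq = (covar ** 2).sum() / (num_groups * cpg)
        return mean_sq / (mean_diag ** 2 + 1e-20)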
2023-03-08 18:09:45,638 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4275, 5.1114, 5.4891, 5.2156, 5.1165, 5.9960, 5.5698, 5.3316], device='cuda:3'), covar=tensor([0.0694, 0.0542, 0.0582, 0.0503, 0.1451, 0.0687, 0.0588, 0.1434], device='cuda:3'), in_proj_covar=tensor([0.0226, 0.0167, 0.0171, 0.0169, 0.0217, 0.0249, 0.0160, 0.0236], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003], device='cuda:3')
2023-03-08 18:10:08,548 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4465, 5.3374, 2.7549, 5.1373, 5.1100, 5.4203, 5.1465, 2.4176], device='cuda:3'), covar=tensor([0.0127, 0.0071, 0.0900, 0.0086, 0.0078, 0.0096, 0.0134, 0.1474], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0049, 0.0081, 0.0057, 0.0056, 0.0048, 0.0061, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0004], device='cuda:3')
2023-03-08 18:10:22,938 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.64 vs. limit=2.0
2023-03-08 18:10:40,648 INFO [train.py:898] (3/4) Epoch 4, batch 3000, loss[loss=0.2303, simple_loss=0.2987, pruned_loss=0.08095, over 18383.00 frames. ], tot_loss[loss=0.2487, simple_loss=0.3203, pruned_loss=0.08852, over 3589982.34 frames. ], batch size: 46, lr: 2.53e-02, grad_scale: 4.0
2023-03-08 18:10:40,648 INFO [train.py:923] (3/4) Computing validation loss
2023-03-08 18:10:52,597 INFO [train.py:932] (3/4) Epoch 4, validation: loss=0.1898, simple_loss=0.292, pruned_loss=0.04378, over 944034.00 frames.
2023-03-08 18:10:52,598 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-08 18:11:17,314 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.366e+02 5.126e+02 6.271e+02 8.521e+02 2.590e+03, threshold=1.254e+03, percent-clipped=11.0
2023-03-08 18:11:19,154 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.84 vs. limit=2.0
2023-03-08 18:11:50,803 INFO [train.py:898] (3/4) Epoch 4, batch 3050, loss[loss=0.2244, simple_loss=0.2969, pruned_loss=0.076, over 18356.00 frames. ], tot_loss[loss=0.2488, simple_loss=0.3205, pruned_loss=0.08857, over 3592804.89 frames. ], batch size: 46, lr: 2.53e-02, grad_scale: 4.0
2023-03-08 18:12:17,723 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4672, 4.0983, 4.4144, 3.4987, 3.2056, 3.3621, 2.1913, 1.6926], device='cuda:3'), covar=tensor([0.0225, 0.0169, 0.0044, 0.0172, 0.0372, 0.0160, 0.0776, 0.1042], device='cuda:3'), in_proj_covar=tensor([0.0035, 0.0035, 0.0027, 0.0040, 0.0057, 0.0031, 0.0057, 0.0062], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-08 18:12:53,755 INFO [train.py:898] (3/4) Epoch 4, batch 3100, loss[loss=0.2293, simple_loss=0.3055, pruned_loss=0.07657, over 18273.00 frames. ], tot_loss[loss=0.2474, simple_loss=0.3193, pruned_loss=0.08777, over 3596913.14 frames. ], batch size: 47, lr: 2.53e-02, grad_scale: 4.0
2023-03-08 18:13:18,983 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.897e+02 4.672e+02 5.881e+02 7.283e+02 1.291e+03, threshold=1.176e+03, percent-clipped=1.0
2023-03-08 18:13:39,364 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.2566, 3.7714, 5.0300, 4.1069, 2.9226, 2.5827, 4.4029, 5.1132], device='cuda:3'), covar=tensor([0.1050, 0.1038, 0.0041, 0.0257, 0.0880, 0.1141, 0.0263, 0.0028], device='cuda:3'), in_proj_covar=tensor([0.0131, 0.0143, 0.0062, 0.0125, 0.0151, 0.0159, 0.0126, 0.0065], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0003, 0.0002, 0.0001], device='cuda:3')
2023-03-08 18:13:42,628 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7550, 2.7787, 3.9782, 4.0807, 2.2893, 4.5063, 3.9325, 2.7541], device='cuda:3'), covar=tensor([0.0269, 0.0946, 0.0197, 0.0171, 0.1460, 0.0098, 0.0251, 0.0924], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0160, 0.0089, 0.0102, 0.0177, 0.0118, 0.0131, 0.0169], device='cuda:3'), out_proj_covar=tensor([1.2537e-04, 1.5701e-04, 9.1599e-05, 9.8521e-05, 1.6976e-04, 1.1166e-04, 1.3689e-04, 1.6945e-04], device='cuda:3')
2023-03-08 18:13:44,284 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.69 vs. limit=2.0
2023-03-08 18:13:52,238 INFO [train.py:898] (3/4) Epoch 4, batch 3150, loss[loss=0.3381, simple_loss=0.3859, pruned_loss=0.1451, over 12525.00 frames. ], tot_loss[loss=0.2466, simple_loss=0.3188, pruned_loss=0.0872, over 3595187.35 frames. ], batch size: 130, lr: 2.52e-02, grad_scale: 4.0
2023-03-08 18:14:15,427 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14072.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:14:23,847 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4615, 2.5275, 2.2861, 2.8662, 3.2714, 3.4354, 2.6539, 2.9586], device='cuda:3'), covar=tensor([0.0352, 0.0426, 0.0874, 0.0426, 0.0342, 0.0181, 0.0527, 0.0312], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0067, 0.0124, 0.0091, 0.0072, 0.0054, 0.0084, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 18:14:40,064 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14093.0, num_to_drop=1, layers_to_drop={2}
2023-03-08 18:14:51,080 INFO [train.py:898] (3/4) Epoch 4, batch 3200, loss[loss=0.2461, simple_loss=0.324, pruned_loss=0.08409, over 18337.00 frames. ], tot_loss[loss=0.2467, simple_loss=0.3189, pruned_loss=0.0873, over 3587471.62 frames. ], batch size: 54, lr: 2.52e-02, grad_scale: 8.0
2023-03-08 18:14:51,404 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14103.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:15:10,627 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14120.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 18:15:14,751 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.167e+02 5.012e+02 5.863e+02 7.076e+02 1.938e+03, threshold=1.173e+03, percent-clipped=6.0
2023-03-08 18:15:26,351 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14133.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:15:49,264 INFO [train.py:898] (3/4) Epoch 4, batch 3250, loss[loss=0.2318, simple_loss=0.3113, pruned_loss=0.0761, over 18398.00 frames. ], tot_loss[loss=0.2472, simple_loss=0.3192, pruned_loss=0.08762, over 3567607.90 frames. ], batch size: 50, lr: 2.51e-02, grad_scale: 8.0
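A note on grad_scale in the loss records: training runs in fp16, and grad_scale is the dynamic loss-scaling factor; it is halved when gradient overflows are detected (8.0 drops to 4.0 around batch 3000 above) and grown back after a stretch of stable updates (8.0 again by batch 3200). A minimal sketch with PyTorch's built-in scaler; the exact growth/backoff settings of the training script are not visible in this log:

    import torch

    # Hypothetical illustration of the dynamic loss scaling seen in grad_scale:
    scaler = torch.cuda.amp.GradScaler(init_scale=8.0, growth_factor=2.0,
                                       backoff_factor=0.5, growth_interval=1000)
    # Typical step (model, optimizer and loss assumed defined elsewhere):
    #   scaler.scale(loss).backward()
    #   scaler.step(optimizer)   # skipped internally if the grads overflowed
    #   scaler.update()          # e.g. 8.0 -> 4.0 after an overflow
    #   current_scale = scaler.get_scale()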
2023-03-08 18:15:54,021 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3647, 5.3246, 4.7594, 5.3253, 5.2754, 4.7097, 5.2519, 4.8913], device='cuda:3'), covar=tensor([0.0404, 0.0390, 0.1775, 0.0660, 0.0461, 0.0410, 0.0374, 0.0653], device='cuda:3'), in_proj_covar=tensor([0.0279, 0.0305, 0.0460, 0.0248, 0.0235, 0.0280, 0.0301, 0.0372], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-08 18:16:01,804 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14164.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:16:21,784 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0
2023-03-08 18:16:22,279 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14181.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 18:16:47,517 INFO [train.py:898] (3/4) Epoch 4, batch 3300, loss[loss=0.2564, simple_loss=0.3312, pruned_loss=0.09082, over 18328.00 frames. ], tot_loss[loss=0.247, simple_loss=0.3189, pruned_loss=0.0876, over 3562214.38 frames. ], batch size: 54, lr: 2.51e-02, grad_scale: 8.0
2023-03-08 18:17:10,858 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.117e+02 4.913e+02 6.024e+02 7.201e+02 1.529e+03, threshold=1.205e+03, percent-clipped=5.0
2023-03-08 18:17:45,356 INFO [train.py:898] (3/4) Epoch 4, batch 3350, loss[loss=0.2431, simple_loss=0.3241, pruned_loss=0.08112, over 18407.00 frames. ], tot_loss[loss=0.2461, simple_loss=0.318, pruned_loss=0.08705, over 3570692.11 frames. ], batch size: 52, lr: 2.51e-02, grad_scale: 8.0
2023-03-08 18:17:52,026 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3038, 2.5833, 2.3239, 2.7892, 3.0833, 3.3696, 2.7618, 2.6943], device='cuda:3'), covar=tensor([0.0336, 0.0299, 0.0984, 0.0419, 0.0318, 0.0210, 0.0413, 0.0310], device='cuda:3'), in_proj_covar=tensor([0.0096, 0.0069, 0.0126, 0.0094, 0.0075, 0.0055, 0.0085, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 18:18:00,087 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0
2023-03-08 18:18:04,363 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6367, 4.1968, 4.3437, 3.3858, 3.3133, 3.3232, 1.8226, 1.9904], device='cuda:3'), covar=tensor([0.0175, 0.0210, 0.0045, 0.0249, 0.0367, 0.0257, 0.0918, 0.0990], device='cuda:3'), in_proj_covar=tensor([0.0035, 0.0035, 0.0029, 0.0042, 0.0059, 0.0032, 0.0058, 0.0063], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-08 18:18:44,674 INFO [train.py:898] (3/4) Epoch 4, batch 3400, loss[loss=0.2376, simple_loss=0.3025, pruned_loss=0.08635, over 18505.00 frames. ], tot_loss[loss=0.2466, simple_loss=0.3184, pruned_loss=0.08739, over 3561237.83 frames. ], batch size: 47, lr: 2.50e-02, grad_scale: 8.0
2023-03-08 18:19:08,456 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.677e+02 5.094e+02 5.876e+02 7.439e+02 2.634e+03, threshold=1.175e+03, percent-clipped=7.0
2023-03-08 18:19:42,052 INFO [train.py:898] (3/4) Epoch 4, batch 3450, loss[loss=0.2505, simple_loss=0.3303, pruned_loss=0.0854, over 18487.00 frames. ], tot_loss[loss=0.2466, simple_loss=0.3183, pruned_loss=0.08744, over 3566049.69 frames. ], batch size: 53, lr: 2.50e-02, grad_scale: 4.0
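A note on the zipformer.py:1455 records: attn_weights_entropy dumps one value per attention head (the mean entropy of that head's attention distribution), together with related statistics of the same quantity and of the input/output projections (the covar, in_proj_covar and out_proj_covar tensors); a head with low entropy is attending to only a few positions. A sketch of the per-head entropy, assuming weights of shape (num_heads, num_queries, num_keys):

    import torch

    def attention_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
        # Hypothetical sketch: mean entropy (in nats) of each head's
        # attention distribution; rows sum to 1 over the key dimension.
        ent = -(attn_weights * (attn_weights + 1e-20).log()).sum(dim=-1)
        return ent.mean(dim=-1)  # one value per head, as in the dumps above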
2023-03-08 18:19:45,695 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7018, 5.1434, 5.3611, 5.0591, 4.8703, 5.1424, 4.2387, 5.0256], device='cuda:3'), covar=tensor([0.0225, 0.0359, 0.0152, 0.0228, 0.0369, 0.0234, 0.1312, 0.0234], device='cuda:3'), in_proj_covar=tensor([0.0118, 0.0158, 0.0136, 0.0129, 0.0151, 0.0156, 0.0228, 0.0139], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005, 0.0005, 0.0004], device='cuda:3')
2023-03-08 18:20:29,302 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14393.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:20:40,693 INFO [train.py:898] (3/4) Epoch 4, batch 3500, loss[loss=0.2272, simple_loss=0.3033, pruned_loss=0.07554, over 18305.00 frames. ], tot_loss[loss=0.2471, simple_loss=0.3188, pruned_loss=0.08773, over 3567419.32 frames. ], batch size: 54, lr: 2.49e-02, grad_scale: 4.0
2023-03-08 18:20:52,873 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14413.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:21:05,562 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.205e+02 4.879e+02 5.762e+02 7.140e+02 1.904e+03, threshold=1.152e+03, percent-clipped=3.0
2023-03-08 18:21:05,960 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3602, 3.0392, 2.8420, 2.5769, 2.9615, 2.4252, 2.4871, 3.2231], device='cuda:3'), covar=tensor([0.0031, 0.0059, 0.0105, 0.0119, 0.0080, 0.0152, 0.0163, 0.0070], device='cuda:3'), in_proj_covar=tensor([0.0050, 0.0064, 0.0060, 0.0090, 0.0057, 0.0092, 0.0104, 0.0052], device='cuda:3'), out_proj_covar=tensor([7.6811e-05, 1.0305e-04, 9.9365e-05, 1.4550e-04, 8.8147e-05, 1.4628e-04, 1.6802e-04, 8.4148e-05], device='cuda:3')
2023-03-08 18:21:09,096 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14428.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:21:22,859 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=14441.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:21:35,660 INFO [train.py:898] (3/4) Epoch 4, batch 3550, loss[loss=0.2255, simple_loss=0.3033, pruned_loss=0.07383, over 18543.00 frames. ], tot_loss[loss=0.2463, simple_loss=0.3183, pruned_loss=0.08716, over 3577787.19 frames. ], batch size: 49, lr: 2.49e-02, grad_scale: 4.0
2023-03-08 18:21:42,311 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14459.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:21:58,344 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14474.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:22:00,249 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14476.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 18:22:13,143 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14487.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:22:30,850 INFO [train.py:898] (3/4) Epoch 4, batch 3600, loss[loss=0.2333, simple_loss=0.3036, pruned_loss=0.08149, over 18475.00 frames. ], tot_loss[loss=0.2461, simple_loss=0.3182, pruned_loss=0.08696, over 3578866.61 frames. ], batch size: 44, lr: 2.49e-02, grad_scale: 8.0
2023-03-08 18:22:45,799 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.36 vs. limit=5.0
2023-03-08 18:22:53,543 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.017e+02 5.164e+02 6.546e+02 8.016e+02 1.916e+03, threshold=1.309e+03, percent-clipped=5.0
2023-03-08 18:22:58,324 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3241, 4.6355, 4.4045, 4.5660, 4.2484, 4.5390, 4.6931, 4.6029], device='cuda:3'), covar=tensor([0.1004, 0.0604, 0.1533, 0.0676, 0.1507, 0.0466, 0.0561, 0.0618], device='cuda:3'), in_proj_covar=tensor([0.0362, 0.0298, 0.0227, 0.0323, 0.0450, 0.0319, 0.0347, 0.0294], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0003], device='cuda:3')
2023-03-08 18:23:35,182 INFO [train.py:898] (3/4) Epoch 5, batch 0, loss[loss=0.2335, simple_loss=0.3, pruned_loss=0.08351, over 18359.00 frames. ], tot_loss[loss=0.2335, simple_loss=0.3, pruned_loss=0.08351, over 18359.00 frames. ], batch size: 46, lr: 2.31e-02, grad_scale: 8.0
2023-03-08 18:23:35,182 INFO [train.py:923] (3/4) Computing validation loss
2023-03-08 18:23:46,756 INFO [train.py:932] (3/4) Epoch 5, validation: loss=0.1908, simple_loss=0.2926, pruned_loss=0.04454, over 944034.00 frames.
2023-03-08 18:23:46,757 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-08 18:23:59,427 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14548.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:24:00,938 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.88 vs. limit=5.0
2023-03-08 18:24:12,413 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4985, 5.0860, 5.1345, 4.9825, 4.7013, 5.0320, 4.1933, 4.8840], device='cuda:3'), covar=tensor([0.0260, 0.0300, 0.0208, 0.0217, 0.0394, 0.0235, 0.1279, 0.0268], device='cuda:3'), in_proj_covar=tensor([0.0116, 0.0153, 0.0135, 0.0124, 0.0146, 0.0150, 0.0217, 0.0136], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-08 18:24:44,282 INFO [train.py:898] (3/4) Epoch 5, batch 50, loss[loss=0.197, simple_loss=0.2789, pruned_loss=0.05754, over 18278.00 frames. ], tot_loss[loss=0.2452, simple_loss=0.3184, pruned_loss=0.08604, over 813088.87 frames. ], batch size: 49, lr: 2.31e-02, grad_scale: 8.0
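A note on the validation records: at batch 0 of the new epoch (and periodically during training) the loop pauses to compute the loss over a fixed dev set, always the same 944034.00 frames, and train.py:933 then reports the peak CUDA memory seen so far (19934MB here). The memory line can be produced with the standard allocator counter; a minimal sketch:

    import torch

    def peak_memory_line(device: torch.device) -> str:
        # Hypothetical sketch of the train.py:933 line: peak bytes -> MB.
        mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        return f"Maximum memory allocated so far is {mb}MB"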
2023-03-08 18:24:51,410 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9300, 4.5674, 4.7927, 3.9906, 3.5048, 3.5443, 2.0864, 2.3331], device='cuda:3'), covar=tensor([0.0163, 0.0206, 0.0025, 0.0148, 0.0330, 0.0223, 0.0805, 0.0873], device='cuda:3'), in_proj_covar=tensor([0.0035, 0.0035, 0.0028, 0.0040, 0.0059, 0.0032, 0.0059, 0.0065], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0003, 0.0004], device='cuda:3')
2023-03-08 18:25:22,512 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4643, 5.2782, 5.5464, 5.5126, 5.2837, 6.1109, 5.6295, 5.4694], device='cuda:3'), covar=tensor([0.0808, 0.0547, 0.0562, 0.0506, 0.1599, 0.0702, 0.0605, 0.1627], device='cuda:3'), in_proj_covar=tensor([0.0232, 0.0173, 0.0176, 0.0173, 0.0219, 0.0251, 0.0165, 0.0246], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003], device='cuda:3')
2023-03-08 18:25:29,118 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.345e+02 4.923e+02 5.805e+02 7.432e+02 1.503e+03, threshold=1.161e+03, percent-clipped=2.0
2023-03-08 18:25:43,379 INFO [train.py:898] (3/4) Epoch 5, batch 100, loss[loss=0.2151, simple_loss=0.2831, pruned_loss=0.07354, over 17660.00 frames. ], tot_loss[loss=0.241, simple_loss=0.3142, pruned_loss=0.08391, over 1431292.84 frames. ], batch size: 39, lr: 2.31e-02, grad_scale: 8.0
2023-03-08 18:26:41,846 INFO [train.py:898] (3/4) Epoch 5, batch 150, loss[loss=0.2748, simple_loss=0.3419, pruned_loss=0.1039, over 18361.00 frames. ], tot_loss[loss=0.2371, simple_loss=0.3106, pruned_loss=0.08182, over 1920020.06 frames. ], batch size: 55, lr: 2.30e-02, grad_scale: 8.0
2023-03-08 18:27:01,413 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9851, 4.7690, 4.9956, 4.8117, 4.5721, 4.8497, 5.2787, 5.2018], device='cuda:3'), covar=tensor([0.0049, 0.0083, 0.0063, 0.0067, 0.0077, 0.0094, 0.0068, 0.0075], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0048, 0.0046, 0.0059, 0.0051, 0.0067, 0.0057, 0.0054], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-08 18:27:26,966 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.743e+02 4.388e+02 5.420e+02 7.125e+02 1.692e+03, threshold=1.084e+03, percent-clipped=3.0
2023-03-08 18:27:30,853 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14728.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:27:40,872 INFO [train.py:898] (3/4) Epoch 5, batch 200, loss[loss=0.2398, simple_loss=0.3199, pruned_loss=0.07984, over 17835.00 frames. ], tot_loss[loss=0.238, simple_loss=0.3113, pruned_loss=0.08234, over 2284197.55 frames. ], batch size: 70, lr: 2.30e-02, grad_scale: 8.0
2023-03-08 18:27:45,148 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14740.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:27:56,581 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8187, 4.8757, 4.9929, 4.7862, 4.6713, 4.8585, 5.2080, 5.2034], device='cuda:3'), covar=tensor([0.0053, 0.0065, 0.0059, 0.0065, 0.0065, 0.0084, 0.0058, 0.0078], device='cuda:3'), in_proj_covar=tensor([0.0059, 0.0047, 0.0044, 0.0057, 0.0049, 0.0065, 0.0055, 0.0052], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002], device='cuda:3')
2023-03-08 18:28:06,686 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14759.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:28:08,241 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.99 vs. limit=2.0
2023-03-08 18:28:17,910 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14769.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:28:27,523 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=14776.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:28:27,593 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14776.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 18:28:39,575 INFO [train.py:898] (3/4) Epoch 5, batch 250, loss[loss=0.2471, simple_loss=0.3073, pruned_loss=0.09343, over 18256.00 frames. ], tot_loss[loss=0.2385, simple_loss=0.3118, pruned_loss=0.08258, over 2585025.41 frames. ], batch size: 47, lr: 2.30e-02, grad_scale: 8.0
2023-03-08 18:28:56,747 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14801.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:29:03,303 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=14807.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:29:05,906 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.2282, 3.9090, 5.1929, 4.3750, 3.1219, 2.9083, 4.5766, 5.2323], device='cuda:3'), covar=tensor([0.0900, 0.1059, 0.0038, 0.0226, 0.0771, 0.0946, 0.0216, 0.0050], device='cuda:3'), in_proj_covar=tensor([0.0127, 0.0148, 0.0062, 0.0125, 0.0154, 0.0155, 0.0128, 0.0066], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-08 18:29:22,484 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=14824.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 18:29:23,283 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.926e+02 4.780e+02 5.936e+02 7.141e+02 1.719e+03, threshold=1.187e+03, percent-clipped=5.0
2023-03-08 18:29:37,973 INFO [train.py:898] (3/4) Epoch 5, batch 300, loss[loss=0.2405, simple_loss=0.3167, pruned_loss=0.08218, over 18367.00 frames. ], tot_loss[loss=0.2405, simple_loss=0.3137, pruned_loss=0.08366, over 2805392.59 frames. ], batch size: 56, lr: 2.29e-02, grad_scale: 8.0
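A note on the zipformer.py:625 records: different parts of the encoder warm up on staggered schedules in batch counts (the warmup_begin/warmup_end windows at 666.7/1333.3, 1333.3/2000.0 and so on), and for some batches a few layers are stochastically skipped; num_to_drop and layers_to_drop record that choice, which by this point in training is usually zero. A minimal sketch of the selection step, assuming layers are drawn uniformly without replacement (how num_to_drop itself is scheduled is not visible in the log):

    import random

    def pick_layers_to_drop(num_layers: int, num_to_drop: int) -> set:
        # Hypothetical sketch: choose which layers to skip for this batch.
        return set(random.sample(range(num_layers), num_to_drop))

    # e.g. pick_layers_to_drop(4, 1) might return {1}, as in the
    # num_to_drop=1, layers_to_drop={1} records above.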
2023-03-08 18:29:39,519 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3367, 5.5088, 3.0734, 5.1656, 5.1422, 5.5481, 5.3973, 2.9531], device='cuda:3'), covar=tensor([0.0129, 0.0032, 0.0566, 0.0052, 0.0053, 0.0031, 0.0064, 0.0762], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0048, 0.0077, 0.0059, 0.0055, 0.0046, 0.0061, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0004], device='cuda:3')
2023-03-08 18:29:45,113 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14843.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:30:35,693 INFO [train.py:898] (3/4) Epoch 5, batch 350, loss[loss=0.2397, simple_loss=0.3053, pruned_loss=0.08708, over 18545.00 frames. ], tot_loss[loss=0.2414, simple_loss=0.3145, pruned_loss=0.08419, over 2976008.60 frames. ], batch size: 49, lr: 2.29e-02, grad_scale: 8.0
2023-03-08 18:30:50,010 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0
2023-03-08 18:31:06,599 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4616, 5.4479, 2.9141, 5.2359, 5.0599, 5.5007, 5.2993, 2.4607], device='cuda:3'), covar=tensor([0.0111, 0.0070, 0.0698, 0.0060, 0.0080, 0.0069, 0.0110, 0.1321], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0048, 0.0077, 0.0058, 0.0055, 0.0046, 0.0061, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0004], device='cuda:3')
2023-03-08 18:31:07,070 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.04 vs. limit=2.0
2023-03-08 18:31:19,828 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0
2023-03-08 18:31:20,044 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.949e+02 4.422e+02 5.424e+02 6.713e+02 1.260e+03, threshold=1.085e+03, percent-clipped=2.0
2023-03-08 18:31:28,373 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.93 vs. limit=5.0
2023-03-08 18:31:29,834 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5084, 5.4356, 2.9654, 5.1941, 5.0723, 5.5219, 5.3823, 2.9096], device='cuda:3'), covar=tensor([0.0107, 0.0053, 0.0592, 0.0052, 0.0079, 0.0048, 0.0072, 0.0882], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0048, 0.0077, 0.0058, 0.0055, 0.0046, 0.0061, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0004], device='cuda:3')
2023-03-08 18:31:34,506 INFO [train.py:898] (3/4) Epoch 5, batch 400, loss[loss=0.1944, simple_loss=0.2699, pruned_loss=0.05947, over 18497.00 frames. ], tot_loss[loss=0.239, simple_loss=0.3127, pruned_loss=0.08264, over 3129012.50 frames. ], batch size: 47, lr: 2.29e-02, grad_scale: 8.0
2023-03-08 18:31:58,431 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. limit=2.0
2023-03-08 18:32:00,750 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0
2023-03-08 18:32:31,191 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2871, 1.8699, 3.6261, 3.0138, 3.6288, 5.0263, 4.1421, 4.3481], device='cuda:3'), covar=tensor([0.0289, 0.0854, 0.0446, 0.0493, 0.0832, 0.0025, 0.0228, 0.0137], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0184, 0.0135, 0.0175, 0.0266, 0.0099, 0.0153, 0.0130], device='cuda:3'), out_proj_covar=tensor([1.0902e-04, 1.4275e-04, 1.1474e-04, 1.2468e-04, 2.0412e-04, 6.9780e-05, 1.1827e-04, 9.9538e-05], device='cuda:3')
2023-03-08 18:32:31,717 INFO [train.py:898] (3/4) Epoch 5, batch 450, loss[loss=0.2506, simple_loss=0.3221, pruned_loss=0.08957, over 18626.00 frames. ], tot_loss[loss=0.2395, simple_loss=0.313, pruned_loss=0.083, over 3237134.31 frames. ], batch size: 52, lr: 2.28e-02, grad_scale: 8.0
2023-03-08 18:33:17,342 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.939e+02 4.662e+02 5.762e+02 7.845e+02 1.537e+03, threshold=1.152e+03, percent-clipped=9.0
2023-03-08 18:33:30,767 INFO [train.py:898] (3/4) Epoch 5, batch 500, loss[loss=0.2475, simple_loss=0.3244, pruned_loss=0.08534, over 17571.00 frames. ], tot_loss[loss=0.2385, simple_loss=0.312, pruned_loss=0.08247, over 3321999.06 frames. ], batch size: 70, lr: 2.28e-02, grad_scale: 8.0
2023-03-08 18:34:09,922 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15069.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:34:30,388 INFO [train.py:898] (3/4) Epoch 5, batch 550, loss[loss=0.2665, simple_loss=0.34, pruned_loss=0.09649, over 18124.00 frames. ], tot_loss[loss=0.2391, simple_loss=0.3128, pruned_loss=0.08268, over 3378105.73 frames. ], batch size: 62, lr: 2.28e-02, grad_scale: 8.0
2023-03-08 18:34:41,330 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=15096.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:35:06,942 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=15117.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:35:15,839 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.759e+02 4.602e+02 5.527e+02 7.375e+02 1.442e+03, threshold=1.105e+03, percent-clipped=1.0
2023-03-08 18:35:29,476 INFO [train.py:898] (3/4) Epoch 5, batch 600, loss[loss=0.2511, simple_loss=0.3252, pruned_loss=0.08854, over 15991.00 frames. ], tot_loss[loss=0.2389, simple_loss=0.3128, pruned_loss=0.08251, over 3431338.01 frames. ], batch size: 94, lr: 2.27e-02, grad_scale: 8.0
2023-03-08 18:35:36,531 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15143.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:36:11,866 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9011, 3.7461, 4.8597, 3.0593, 4.3088, 2.8979, 3.0210, 1.9741], device='cuda:3'), covar=tensor([0.0603, 0.0466, 0.0056, 0.0397, 0.0394, 0.1386, 0.1359, 0.1380], device='cuda:3'), in_proj_covar=tensor([0.0151, 0.0154, 0.0077, 0.0123, 0.0173, 0.0203, 0.0172, 0.0168], device='cuda:3'), out_proj_covar=tensor([1.4771e-04, 1.5630e-04, 8.1781e-05, 1.2329e-04, 1.7478e-04, 2.0105e-04, 1.8151e-04, 1.6875e-04], device='cuda:3')
2023-03-08 18:36:28,267 INFO [train.py:898] (3/4) Epoch 5, batch 650, loss[loss=0.2457, simple_loss=0.3202, pruned_loss=0.08557, over 15993.00 frames. ], tot_loss[loss=0.2393, simple_loss=0.3129, pruned_loss=0.08283, over 3456512.21 frames. ], batch size: 94, lr: 2.27e-02, grad_scale: 8.0
2023-03-08 18:36:32,835 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=15191.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:36:37,703 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=15195.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:36:50,551 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1155, 2.8620, 2.6707, 2.5144, 2.7769, 2.3571, 2.3556, 3.1231], device='cuda:3'), covar=tensor([0.0035, 0.0067, 0.0095, 0.0116, 0.0106, 0.0139, 0.0170, 0.0055], device='cuda:3'), in_proj_covar=tensor([0.0052, 0.0065, 0.0062, 0.0094, 0.0062, 0.0099, 0.0107, 0.0056], device='cuda:3'), out_proj_covar=tensor([7.9092e-05, 1.0392e-04, 1.0224e-04, 1.5397e-04, 9.6638e-05, 1.5826e-04, 1.7174e-04, 9.0074e-05], device='cuda:3')
2023-03-08 18:37:13,512 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.710e+02 4.789e+02 6.054e+02 7.320e+02 2.200e+03, threshold=1.211e+03, percent-clipped=6.0
2023-03-08 18:37:27,068 INFO [train.py:898] (3/4) Epoch 5, batch 700, loss[loss=0.2664, simple_loss=0.3384, pruned_loss=0.09715, over 17190.00 frames. ], tot_loss[loss=0.2404, simple_loss=0.3139, pruned_loss=0.08342, over 3475504.12 frames. ], batch size: 78, lr: 2.27e-02, grad_scale: 8.0
2023-03-08 18:37:50,287 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=15256.0, num_to_drop=1, layers_to_drop={2}
2023-03-08 18:38:17,209 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.2868, 3.8796, 5.3138, 4.2914, 3.2001, 2.8067, 4.5253, 5.2129], device='cuda:3'), covar=tensor([0.1013, 0.1146, 0.0036, 0.0282, 0.0921, 0.1143, 0.0264, 0.0048], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0158, 0.0064, 0.0132, 0.0161, 0.0162, 0.0134, 0.0071], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0002, 0.0003, 0.0002, 0.0001], device='cuda:3')
2023-03-08 18:38:26,035 INFO [train.py:898] (3/4) Epoch 5, batch 750, loss[loss=0.1971, simple_loss=0.2638, pruned_loss=0.06524, over 17646.00 frames. ], tot_loss[loss=0.2405, simple_loss=0.3137, pruned_loss=0.08366, over 3488207.51 frames. ], batch size: 39, lr: 2.26e-02, grad_scale: 8.0
2023-03-08 18:38:35,668 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.74 vs. limit=5.0
2023-03-08 18:39:10,698 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.929e+02 4.559e+02 5.323e+02 7.039e+02 1.434e+03, threshold=1.065e+03, percent-clipped=2.0
2023-03-08 18:39:12,813 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6122, 2.5713, 3.9363, 3.9975, 2.3886, 4.2764, 3.8929, 2.6123], device='cuda:3'), covar=tensor([0.0239, 0.0929, 0.0176, 0.0130, 0.1278, 0.0100, 0.0200, 0.0870], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0170, 0.0094, 0.0100, 0.0178, 0.0119, 0.0133, 0.0171], device='cuda:3'), out_proj_covar=tensor([1.2864e-04, 1.6537e-04, 9.9082e-05, 9.6523e-05, 1.7070e-04, 1.1192e-04, 1.3799e-04, 1.7030e-04], device='cuda:3')
2023-03-08 18:39:24,935 INFO [train.py:898] (3/4) Epoch 5, batch 800, loss[loss=0.2187, simple_loss=0.291, pruned_loss=0.07321, over 17688.00 frames. ], tot_loss[loss=0.2397, simple_loss=0.3131, pruned_loss=0.08318, over 3502659.51 frames. ], batch size: 39, lr: 2.26e-02, grad_scale: 8.0
2023-03-08 18:40:24,762 INFO [train.py:898] (3/4) Epoch 5, batch 850, loss[loss=0.2647, simple_loss=0.3346, pruned_loss=0.09741, over 16890.00 frames. ], tot_loss[loss=0.2392, simple_loss=0.3129, pruned_loss=0.08272, over 3526515.69 frames. ], batch size: 78, lr: 2.26e-02, grad_scale: 8.0
2023-03-08 18:40:35,273 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15396.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:41:09,840 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.891e+02 4.807e+02 5.667e+02 6.596e+02 1.929e+03, threshold=1.133e+03, percent-clipped=7.0
2023-03-08 18:41:24,189 INFO [train.py:898] (3/4) Epoch 5, batch 900, loss[loss=0.2775, simple_loss=0.3462, pruned_loss=0.1044, over 17093.00 frames. ], tot_loss[loss=0.2394, simple_loss=0.3128, pruned_loss=0.08304, over 3536846.37 frames. ], batch size: 78, lr: 2.25e-02, grad_scale: 8.0
2023-03-08 18:41:32,536 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=15444.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:42:23,787 INFO [train.py:898] (3/4) Epoch 5, batch 950, loss[loss=0.2254, simple_loss=0.3037, pruned_loss=0.07356, over 18339.00 frames. ], tot_loss[loss=0.2389, simple_loss=0.3125, pruned_loss=0.08272, over 3550862.03 frames. ], batch size: 55, lr: 2.25e-02, grad_scale: 8.0
2023-03-08 18:42:35,499 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3374, 4.3835, 4.3831, 4.1682, 4.2738, 4.2648, 4.6394, 4.5575], device='cuda:3'), covar=tensor([0.0073, 0.0084, 0.0080, 0.0111, 0.0082, 0.0104, 0.0078, 0.0113], device='cuda:3'), in_proj_covar=tensor([0.0063, 0.0051, 0.0048, 0.0062, 0.0053, 0.0070, 0.0059, 0.0058], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-08 18:43:09,482 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.487e+02 4.499e+02 5.437e+02 6.753e+02 3.373e+03, threshold=1.087e+03, percent-clipped=5.0
2023-03-08 18:43:23,266 INFO [train.py:898] (3/4) Epoch 5, batch 1000, loss[loss=0.1997, simple_loss=0.2782, pruned_loss=0.06056, over 18516.00 frames. ], tot_loss[loss=0.2386, simple_loss=0.3127, pruned_loss=0.08227, over 3561968.82 frames. ], batch size: 44, lr: 2.25e-02, grad_scale: 8.0
2023-03-08 18:43:39,900 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=15551.0, num_to_drop=1, layers_to_drop={2}
2023-03-08 18:44:23,079 INFO [train.py:898] (3/4) Epoch 5, batch 1050, loss[loss=0.2017, simple_loss=0.2777, pruned_loss=0.0629, over 18336.00 frames. ], tot_loss[loss=0.2377, simple_loss=0.3118, pruned_loss=0.08182, over 3567295.85 frames. ], batch size: 46, lr: 2.24e-02, grad_scale: 8.0
2023-03-08 18:44:31,156 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5028, 4.4075, 2.5499, 4.6107, 5.3846, 2.6614, 4.0405, 3.8115], device='cuda:3'), covar=tensor([0.0057, 0.1043, 0.1540, 0.0390, 0.0041, 0.1283, 0.0628, 0.0772], device='cuda:3'), in_proj_covar=tensor([0.0083, 0.0154, 0.0178, 0.0169, 0.0073, 0.0164, 0.0188, 0.0180], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 18:45:09,010 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.376e+02 4.268e+02 5.180e+02 6.022e+02 1.534e+03, threshold=1.036e+03, percent-clipped=2.0
2023-03-08 18:45:23,121 INFO [train.py:898] (3/4) Epoch 5, batch 1100, loss[loss=0.2956, simple_loss=0.357, pruned_loss=0.1171, over 18334.00 frames. ], tot_loss[loss=0.2379, simple_loss=0.3118, pruned_loss=0.08194, over 3571576.22 frames. ], batch size: 56, lr: 2.24e-02, grad_scale: 8.0
2023-03-08 18:45:31,362 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6311, 5.1215, 5.5952, 5.5030, 5.4098, 6.1325, 5.6957, 5.4932], device='cuda:3'), covar=tensor([0.0562, 0.0473, 0.0482, 0.0479, 0.0987, 0.0591, 0.0492, 0.1061], device='cuda:3'), in_proj_covar=tensor([0.0236, 0.0174, 0.0184, 0.0181, 0.0219, 0.0268, 0.0167, 0.0254], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003], device='cuda:3')
2023-03-08 18:46:22,717 INFO [train.py:898] (3/4) Epoch 5, batch 1150, loss[loss=0.2336, simple_loss=0.3138, pruned_loss=0.07675, over 17989.00 frames. ], tot_loss[loss=0.2375, simple_loss=0.3116, pruned_loss=0.08168, over 3579023.04 frames. ], batch size: 65, lr: 2.24e-02, grad_scale: 8.0
2023-03-08 18:47:00,087 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5256, 4.2641, 4.4412, 3.2523, 3.4906, 3.5747, 2.2935, 1.8303], device='cuda:3'), covar=tensor([0.0210, 0.0206, 0.0038, 0.0288, 0.0329, 0.0130, 0.0780, 0.1012], device='cuda:3'), in_proj_covar=tensor([0.0039, 0.0038, 0.0031, 0.0045, 0.0061, 0.0035, 0.0063, 0.0069], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0004, 0.0004], device='cuda:3')
2023-03-08 18:47:06,231 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.304e+02 5.042e+02 5.995e+02 7.754e+02 1.495e+03, threshold=1.199e+03, percent-clipped=7.0
2023-03-08 18:47:21,467 INFO [train.py:898] (3/4) Epoch 5, batch 1200, loss[loss=0.3323, simple_loss=0.3718, pruned_loss=0.1464, over 12007.00 frames. ], tot_loss[loss=0.2378, simple_loss=0.3117, pruned_loss=0.08191, over 3572767.48 frames. ], batch size: 129, lr: 2.23e-02, grad_scale: 8.0
2023-03-08 18:47:27,507 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7475, 2.6853, 4.0474, 4.0761, 2.3758, 4.5051, 3.7460, 2.7927], device='cuda:3'), covar=tensor([0.0283, 0.1016, 0.0138, 0.0171, 0.1409, 0.0087, 0.0337, 0.0875], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0175, 0.0096, 0.0102, 0.0187, 0.0124, 0.0138, 0.0174], device='cuda:3'), out_proj_covar=tensor([1.3259e-04, 1.6971e-04, 1.0114e-04, 9.8163e-05, 1.7872e-04, 1.1711e-04, 1.4335e-04, 1.7397e-04], device='cuda:3')
2023-03-08 18:48:19,538 INFO [train.py:898] (3/4) Epoch 5, batch 1250, loss[loss=0.2135, simple_loss=0.2797, pruned_loss=0.07366, over 18421.00 frames. ], tot_loss[loss=0.2362, simple_loss=0.3104, pruned_loss=0.08102, over 3588569.37 frames. ], batch size: 43, lr: 2.23e-02, grad_scale: 8.0
2023-03-08 18:49:03,913 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.909e+02 4.343e+02 5.272e+02 6.723e+02 1.264e+03, threshold=1.054e+03, percent-clipped=1.0
2023-03-08 18:49:09,582 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0
2023-03-08 18:49:18,483 INFO [train.py:898] (3/4) Epoch 5, batch 1300, loss[loss=0.2181, simple_loss=0.284, pruned_loss=0.07608, over 17744.00 frames. ], tot_loss[loss=0.2366, simple_loss=0.3107, pruned_loss=0.08123, over 3586413.19 frames. ], batch size: 39, lr: 2.23e-02, grad_scale: 4.0
2023-03-08 18:49:35,235 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15851.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:50:12,194 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=15883.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:50:16,770 INFO [train.py:898] (3/4) Epoch 5, batch 1350, loss[loss=0.2643, simple_loss=0.3393, pruned_loss=0.09469, over 18237.00 frames. ], tot_loss[loss=0.2362, simple_loss=0.3104, pruned_loss=0.08098, over 3589828.77 frames. ], batch size: 60, lr: 2.22e-02, grad_scale: 4.0
2023-03-08 18:50:31,445 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=15899.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:50:51,452 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-08 18:51:02,178 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.900e+02 4.519e+02 5.491e+02 6.869e+02 1.432e+03, threshold=1.098e+03, percent-clipped=4.0
2023-03-08 18:51:11,715 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3151, 5.9408, 5.4395, 5.6735, 5.2696, 5.5161, 5.9584, 5.8665], device='cuda:3'), covar=tensor([0.1278, 0.0576, 0.0470, 0.0716, 0.1910, 0.0650, 0.0584, 0.0670], device='cuda:3'), in_proj_covar=tensor([0.0401, 0.0317, 0.0247, 0.0355, 0.0496, 0.0351, 0.0394, 0.0328], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-08 18:51:15,443 INFO [train.py:898] (3/4) Epoch 5, batch 1400, loss[loss=0.2173, simple_loss=0.2984, pruned_loss=0.06813, over 18389.00 frames. ], tot_loss[loss=0.236, simple_loss=0.31, pruned_loss=0.08101, over 3599571.78 frames. ], batch size: 52, lr: 2.22e-02, grad_scale: 4.0
2023-03-08 18:51:25,015 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=15944.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:51:54,386 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0
2023-03-08 18:52:01,945 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1779, 4.0496, 5.2087, 3.3976, 4.4102, 2.8784, 3.0465, 2.3282], device='cuda:3'), covar=tensor([0.0479, 0.0416, 0.0050, 0.0306, 0.0394, 0.1382, 0.1425, 0.1182], device='cuda:3'), in_proj_covar=tensor([0.0158, 0.0163, 0.0081, 0.0128, 0.0177, 0.0211, 0.0180, 0.0173], device='cuda:3'), out_proj_covar=tensor([1.5458e-04, 1.6375e-04, 8.5035e-05, 1.2815e-04, 1.7776e-04, 2.0807e-04, 1.8902e-04, 1.7303e-04], device='cuda:3')
2023-03-08 18:52:13,986 INFO [train.py:898] (3/4) Epoch 5, batch 1450, loss[loss=0.2468, simple_loss=0.329, pruned_loss=0.08229, over 18251.00 frames. ], tot_loss[loss=0.2364, simple_loss=0.3103, pruned_loss=0.08131, over 3590320.61 frames. ], batch size: 57, lr: 2.22e-02, grad_scale: 4.0
2023-03-08 18:52:22,342 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7045, 4.0035, 2.0964, 4.0081, 4.6012, 2.3250, 3.5165, 3.4632], device='cuda:3'), covar=tensor([0.0058, 0.0604, 0.1485, 0.0374, 0.0044, 0.1260, 0.0636, 0.0850], device='cuda:3'), in_proj_covar=tensor([0.0080, 0.0157, 0.0175, 0.0169, 0.0070, 0.0163, 0.0184, 0.0181], device='cuda:3'), out_proj_covar=tensor([1.1050e-04, 2.2419e-04, 2.2877e-04, 2.2737e-04, 9.6964e-05, 2.2120e-04, 2.3866e-04, 2.4188e-04], device='cuda:3')
2023-03-08 18:52:36,568 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16001.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:52:42,089 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4874, 5.1105, 5.5859, 5.3715, 5.4070, 6.0180, 5.6361, 5.5004], device='cuda:3'), covar=tensor([0.0753, 0.0536, 0.0597, 0.0517, 0.1145, 0.0668, 0.0562, 0.1269], device='cuda:3'), in_proj_covar=tensor([0.0237, 0.0173, 0.0181, 0.0181, 0.0221, 0.0265, 0.0170, 0.0253], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003], device='cuda:3')
2023-03-08 18:52:58,546 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6872, 4.5938, 4.6096, 4.5252, 4.4838, 4.6070, 4.9255, 4.9333], device='cuda:3'), covar=tensor([0.0069, 0.0091, 0.0104, 0.0106, 0.0080, 0.0102, 0.0135, 0.0116], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0051, 0.0049, 0.0063, 0.0055, 0.0072, 0.0060, 0.0058], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-08 18:53:04,648 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.050e+02 4.332e+02 5.250e+02 6.313e+02 1.611e+03, threshold=1.050e+03, percent-clipped=2.0
2023-03-08 18:53:16,910 INFO [train.py:898] (3/4) Epoch 5, batch 1500, loss[loss=0.2397, simple_loss=0.3196, pruned_loss=0.07988, over 18311.00 frames. ], tot_loss[loss=0.2359, simple_loss=0.31, pruned_loss=0.08087, over 3598928.24 frames. ], batch size: 54, lr: 2.21e-02, grad_scale: 4.0
2023-03-08 18:53:23,073 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-08 18:53:47,570 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16062.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:54:05,052 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7876, 2.7287, 4.2522, 4.0265, 2.5022, 4.6257, 3.7780, 2.7771], device='cuda:3'), covar=tensor([0.0303, 0.1185, 0.0157, 0.0193, 0.1515, 0.0101, 0.0407, 0.1041], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0177, 0.0096, 0.0103, 0.0186, 0.0129, 0.0139, 0.0172], device='cuda:3'), out_proj_covar=tensor([1.3599e-04, 1.7193e-04, 1.0246e-04, 9.8750e-05, 1.7818e-04, 1.2127e-04, 1.4364e-04, 1.7211e-04], device='cuda:3')
2023-03-08 18:54:14,592 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16085.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:54:16,407 INFO [train.py:898] (3/4) Epoch 5, batch 1550, loss[loss=0.2207, simple_loss=0.3049, pruned_loss=0.06819, over 18636.00 frames. ], tot_loss[loss=0.2353, simple_loss=0.3097, pruned_loss=0.08044, over 3613332.19 frames. ], batch size: 52, lr: 2.21e-02, grad_scale: 4.0
2023-03-08 18:54:31,862 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2198, 1.9466, 3.4747, 3.0755, 3.5995, 5.0812, 4.4779, 4.7174], device='cuda:3'), covar=tensor([0.0321, 0.0902, 0.0652, 0.0506, 0.0913, 0.0027, 0.0180, 0.0094], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0191, 0.0151, 0.0185, 0.0276, 0.0104, 0.0158, 0.0136], device='cuda:3'), out_proj_covar=tensor([1.1059e-04, 1.4565e-04, 1.2406e-04, 1.3011e-04, 2.0723e-04, 7.2853e-05, 1.2026e-04, 1.0216e-04], device='cuda:3')
2023-03-08 18:55:01,841 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0
2023-03-08 18:55:02,223 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.998e+02 4.560e+02 5.884e+02 7.024e+02 2.005e+03, threshold=1.177e+03, percent-clipped=3.0
2023-03-08 18:55:08,257 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5078, 4.0609, 4.2116, 3.1571, 3.3326, 3.3201, 2.1454, 2.0004], device='cuda:3'), covar=tensor([0.0154, 0.0205, 0.0050, 0.0233, 0.0307, 0.0141, 0.0755, 0.0907], device='cuda:3'), in_proj_covar=tensor([0.0038, 0.0037, 0.0030, 0.0044, 0.0059, 0.0036, 0.0061, 0.0066], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0004, 0.0004], device='cuda:3')
2023-03-08 18:55:14,721 INFO [train.py:898] (3/4) Epoch 5, batch 1600, loss[loss=0.2268, simple_loss=0.2994, pruned_loss=0.0771, over 18353.00 frames. ], tot_loss[loss=0.2353, simple_loss=0.3098, pruned_loss=0.08043, over 3599192.82 frames. ], batch size: 46, lr: 2.21e-02, grad_scale: 8.0
2023-03-08 18:55:15,104 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2819, 5.2732, 4.6623, 5.2267, 5.2606, 4.6591, 5.1892, 4.7753], device='cuda:3'), covar=tensor([0.0373, 0.0434, 0.1772, 0.0553, 0.0440, 0.0372, 0.0344, 0.0865], device='cuda:3'), in_proj_covar=tensor([0.0300, 0.0335, 0.0490, 0.0272, 0.0256, 0.0308, 0.0326, 0.0413], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-08 18:55:24,999 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7075, 3.3397, 3.2314, 2.8042, 3.3260, 2.7638, 2.4624, 3.6940], device='cuda:3'), covar=tensor([0.0028, 0.0088, 0.0062, 0.0110, 0.0064, 0.0138, 0.0184, 0.0038], device='cuda:3'), in_proj_covar=tensor([0.0052, 0.0067, 0.0062, 0.0096, 0.0060, 0.0097, 0.0108, 0.0057], device='cuda:3'), out_proj_covar=tensor([7.7454e-05, 1.0760e-04, 1.0058e-04, 1.5645e-04, 9.2474e-05, 1.5499e-04, 1.7276e-04, 8.9572e-05], device='cuda:3')
2023-03-08 18:55:26,155 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16146.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:55:29,594 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16149.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:56:14,002 INFO [train.py:898] (3/4) Epoch 5, batch 1650, loss[loss=0.2142, simple_loss=0.2904, pruned_loss=0.06895, over 18400.00 frames. ], tot_loss[loss=0.2352, simple_loss=0.3098, pruned_loss=0.08031, over 3600713.95 frames. ], batch size: 48, lr: 2.20e-02, grad_scale: 8.0
2023-03-08 18:56:42,362 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16210.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:57:00,575 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0
2023-03-08 18:57:00,821 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.794e+02 4.512e+02 5.424e+02 6.724e+02 1.524e+03, threshold=1.085e+03, percent-clipped=4.0
2023-03-08 18:57:13,385 INFO [train.py:898] (3/4) Epoch 5, batch 1700, loss[loss=0.2474, simple_loss=0.3248, pruned_loss=0.085, over 15928.00 frames. ], tot_loss[loss=0.2352, simple_loss=0.3098, pruned_loss=0.08028, over 3588925.03 frames. ], batch size: 94, lr: 2.20e-02, grad_scale: 8.0
2023-03-08 18:57:15,936 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16239.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:57:46,389 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4594, 5.4463, 4.8843, 5.4004, 5.4357, 4.9115, 5.3861, 5.0290], device='cuda:3'), covar=tensor([0.0402, 0.0363, 0.1841, 0.0786, 0.0435, 0.0384, 0.0383, 0.0757], device='cuda:3'), in_proj_covar=tensor([0.0300, 0.0336, 0.0489, 0.0270, 0.0253, 0.0309, 0.0326, 0.0416], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-08 18:58:12,619 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.65 vs. limit=2.0
2023-03-08 18:58:13,012 INFO [train.py:898] (3/4) Epoch 5, batch 1750, loss[loss=0.2633, simple_loss=0.3369, pruned_loss=0.09484, over 17969.00 frames. ], tot_loss[loss=0.2345, simple_loss=0.3092, pruned_loss=0.07992, over 3585399.26 frames. ], batch size: 65, lr: 2.20e-02, grad_scale: 8.0
2023-03-08 18:58:58,896 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.349e+02 4.407e+02 5.169e+02 6.479e+02 1.420e+03, threshold=1.034e+03, percent-clipped=4.0
2023-03-08 18:59:05,575 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0
2023-03-08 18:59:11,659 INFO [train.py:898] (3/4) Epoch 5, batch 1800, loss[loss=0.2473, simple_loss=0.3263, pruned_loss=0.0842, over 18562.00 frames. ], tot_loss[loss=0.2345, simple_loss=0.3093, pruned_loss=0.07984, over 3586786.37 frames. ], batch size: 54, lr: 2.19e-02, grad_scale: 8.0
2023-03-08 18:59:35,165 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16357.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 18:59:52,294 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16371.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 19:00:10,468 INFO [train.py:898] (3/4) Epoch 5, batch 1850, loss[loss=0.2991, simple_loss=0.3547, pruned_loss=0.1217, over 12884.00 frames. ], tot_loss[loss=0.235, simple_loss=0.3092, pruned_loss=0.08036, over 3587323.11 frames. ], batch size: 130, lr: 2.19e-02, grad_scale: 8.0
2023-03-08 19:00:51,836 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4185, 4.4038, 2.1522, 4.4845, 5.3740, 2.5680, 4.1246, 3.7211], device='cuda:3'), covar=tensor([0.0040, 0.0854, 0.1633, 0.0372, 0.0023, 0.1227, 0.0495, 0.0738], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0157, 0.0177, 0.0170, 0.0071, 0.0165, 0.0182, 0.0179], device='cuda:3'), out_proj_covar=tensor([1.1132e-04, 2.2431e-04, 2.3087e-04, 2.2919e-04, 9.8822e-05, 2.2375e-04, 2.3742e-04, 2.4072e-04], device='cuda:3')
2023-03-08 19:00:55,913 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.146e+02 4.944e+02 5.980e+02 8.174e+02 1.619e+03, threshold=1.196e+03, percent-clipped=7.0
2023-03-08 19:01:03,120 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16432.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 19:01:09,065 INFO [train.py:898] (3/4) Epoch 5, batch 1900, loss[loss=0.2218, simple_loss=0.2997, pruned_loss=0.07198, over 18500.00 frames. ], tot_loss[loss=0.2349, simple_loss=0.3091, pruned_loss=0.08032, over 3588764.61 frames. ], batch size: 51, lr: 2.19e-02, grad_scale: 8.0
2023-03-08 19:01:13,689 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16441.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 19:02:07,326 INFO [train.py:898] (3/4) Epoch 5, batch 1950, loss[loss=0.2025, simple_loss=0.2758, pruned_loss=0.06458, over 18400.00 frames. ], tot_loss[loss=0.2349, simple_loss=0.3091, pruned_loss=0.08033, over 3602695.50 frames. ], batch size: 42, lr: 2.19e-02, grad_scale: 8.0
2023-03-08 19:02:28,072 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16505.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 19:02:53,788 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.181e+02 4.721e+02 5.735e+02 6.721e+02 1.938e+03, threshold=1.147e+03, percent-clipped=5.0
2023-03-08 19:03:06,026 INFO [train.py:898] (3/4) Epoch 5, batch 2000, loss[loss=0.1983, simple_loss=0.2681, pruned_loss=0.06428, over 18495.00 frames. ], tot_loss[loss=0.2347, simple_loss=0.309, pruned_loss=0.08019, over 3604435.39 frames. ], batch size: 47, lr: 2.18e-02, grad_scale: 8.0
2023-03-08 19:03:09,094 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16539.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 19:04:04,801 INFO [train.py:898] (3/4) Epoch 5, batch 2050, loss[loss=0.2277, simple_loss=0.2949, pruned_loss=0.08028, over 18432.00 frames. ], tot_loss[loss=0.2348, simple_loss=0.3096, pruned_loss=0.08006, over 3614070.68 frames. ], batch size: 42, lr: 2.18e-02, grad_scale: 8.0
2023-03-08 19:04:05,082 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=16587.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 19:04:47,533 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7533, 3.6017, 3.3090, 2.9501, 3.2440, 2.7127, 2.6402, 3.7885], device='cuda:3'), covar=tensor([0.0026, 0.0079, 0.0079, 0.0102, 0.0072, 0.0138, 0.0159, 0.0040], device='cuda:3'), in_proj_covar=tensor([0.0053, 0.0066, 0.0062, 0.0097, 0.0062, 0.0100, 0.0107, 0.0058], device='cuda:3'), out_proj_covar=tensor([7.8568e-05, 1.0486e-04, 1.0093e-04, 1.5693e-04, 9.5841e-05, 1.6029e-04, 1.6866e-04, 9.0238e-05], device='cuda:3')
2023-03-08 19:04:51,607 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.823e+02 4.363e+02 5.469e+02 6.595e+02 1.217e+03, threshold=1.094e+03, percent-clipped=2.0
2023-03-08 19:04:55,113 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.72 vs. limit=2.0
2023-03-08 19:05:04,762 INFO [train.py:898] (3/4) Epoch 5, batch 2100, loss[loss=0.2226, simple_loss=0.2985, pruned_loss=0.07332, over 18355.00 frames. ], tot_loss[loss=0.2344, simple_loss=0.3095, pruned_loss=0.07969, over 3614419.71 frames. ], batch size: 46, lr: 2.18e-02, grad_scale: 8.0
2023-03-08 19:05:24,091 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4428, 2.5793, 2.6168, 2.7514, 3.2758, 3.4166, 2.5618, 3.0480], device='cuda:3'), covar=tensor([0.0274, 0.0363, 0.0629, 0.0358, 0.0275, 0.0193, 0.0542, 0.0264], device='cuda:3'), in_proj_covar=tensor([0.0098, 0.0074, 0.0128, 0.0100, 0.0072, 0.0054, 0.0091, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 19:05:28,487 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16657.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 19:05:38,326 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0
2023-03-08 19:06:04,032 INFO [train.py:898] (3/4) Epoch 5, batch 2150, loss[loss=0.2272, simple_loss=0.3103, pruned_loss=0.07209, over 18404.00 frames. ], tot_loss[loss=0.233, simple_loss=0.308, pruned_loss=0.07898, over 3613162.55 frames. ], batch size: 52, lr: 2.17e-02, grad_scale: 8.0
2023-03-08 19:06:24,762 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=16705.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 19:06:49,438 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.208e+02 4.846e+02 5.504e+02 7.182e+02 1.365e+03, threshold=1.101e+03, percent-clipped=1.0
2023-03-08 19:06:50,857 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16727.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 19:07:00,395 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1054, 3.2918, 2.2874, 3.5400, 4.0336, 2.3592, 3.1656, 3.1795], device='cuda:3'), covar=tensor([0.0084, 0.1220, 0.1292, 0.0397, 0.0070, 0.1225, 0.0595, 0.0668], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0159, 0.0172, 0.0169, 0.0071, 0.0162, 0.0181, 0.0175], device='cuda:3'), out_proj_covar=tensor([1.1030e-04, 2.2642e-04, 2.2656e-04, 2.2762e-04, 9.7328e-05, 2.2113e-04, 2.3670e-04, 2.3610e-04], device='cuda:3')
2023-03-08 19:07:02,270 INFO [train.py:898] (3/4) Epoch 5, batch 2200, loss[loss=0.2227, simple_loss=0.3065, pruned_loss=0.06946, over 18371.00 frames. ], tot_loss[loss=0.2341, simple_loss=0.3088, pruned_loss=0.07966, over 3592197.15 frames. ], batch size: 50, lr: 2.17e-02, grad_scale: 8.0
], tot_loss[loss=0.2341, simple_loss=0.3088, pruned_loss=0.07966, over 3592197.15 frames. ], batch size: 50, lr: 2.17e-02, grad_scale: 8.0 2023-03-08 19:07:06,978 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16741.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:07:18,810 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2901, 5.8543, 5.2464, 5.5683, 5.3126, 5.3921, 5.8779, 5.8085], device='cuda:3'), covar=tensor([0.0967, 0.0485, 0.0381, 0.0630, 0.1285, 0.0554, 0.0477, 0.0545], device='cuda:3'), in_proj_covar=tensor([0.0399, 0.0329, 0.0256, 0.0354, 0.0495, 0.0358, 0.0411, 0.0333], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-08 19:07:23,339 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7180, 4.3855, 4.6373, 3.5337, 3.4682, 3.6266, 2.5921, 1.9387], device='cuda:3'), covar=tensor([0.0133, 0.0147, 0.0036, 0.0231, 0.0343, 0.0131, 0.0635, 0.1034], device='cuda:3'), in_proj_covar=tensor([0.0040, 0.0039, 0.0032, 0.0047, 0.0064, 0.0038, 0.0063, 0.0069], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0004, 0.0002, 0.0004, 0.0004], device='cuda:3') 2023-03-08 19:08:01,114 INFO [train.py:898] (3/4) Epoch 5, batch 2250, loss[loss=0.2476, simple_loss=0.3234, pruned_loss=0.08588, over 18270.00 frames. ], tot_loss[loss=0.2353, simple_loss=0.3101, pruned_loss=0.08024, over 3588817.55 frames. ], batch size: 57, lr: 2.17e-02, grad_scale: 8.0 2023-03-08 19:08:03,436 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=16789.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:08:22,325 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16805.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:08:26,957 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0245, 4.5563, 4.8177, 3.6907, 3.6942, 3.6508, 2.7213, 2.4474], device='cuda:3'), covar=tensor([0.0127, 0.0190, 0.0045, 0.0225, 0.0277, 0.0128, 0.0658, 0.0846], device='cuda:3'), in_proj_covar=tensor([0.0039, 0.0038, 0.0032, 0.0047, 0.0063, 0.0037, 0.0063, 0.0068], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0004, 0.0002, 0.0004, 0.0004], device='cuda:3') 2023-03-08 19:08:28,489 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.88 vs. limit=2.0 2023-03-08 19:08:46,492 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.297e+02 4.310e+02 5.501e+02 7.389e+02 1.420e+03, threshold=1.100e+03, percent-clipped=2.0 2023-03-08 19:09:00,035 INFO [train.py:898] (3/4) Epoch 5, batch 2300, loss[loss=0.2245, simple_loss=0.2909, pruned_loss=0.07904, over 18499.00 frames. ], tot_loss[loss=0.2356, simple_loss=0.3102, pruned_loss=0.08046, over 3587782.08 frames. ], batch size: 44, lr: 2.16e-02, grad_scale: 8.0 2023-03-08 19:09:10,805 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-08 19:09:17,948 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=16853.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:09:35,584 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16868.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 19:09:42,879 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. 
limit=2.0 2023-03-08 19:09:57,301 INFO [train.py:898] (3/4) Epoch 5, batch 2350, loss[loss=0.1957, simple_loss=0.2708, pruned_loss=0.06034, over 18367.00 frames. ], tot_loss[loss=0.2353, simple_loss=0.31, pruned_loss=0.08032, over 3600817.56 frames. ], batch size: 42, lr: 2.16e-02, grad_scale: 8.0 2023-03-08 19:10:19,837 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5964, 3.5533, 2.0020, 4.2959, 2.9482, 4.7188, 2.3307, 4.2277], device='cuda:3'), covar=tensor([0.0507, 0.0715, 0.1380, 0.0382, 0.0927, 0.0123, 0.1121, 0.0255], device='cuda:3'), in_proj_covar=tensor([0.0156, 0.0192, 0.0166, 0.0166, 0.0166, 0.0122, 0.0170, 0.0161], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:10:19,841 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16906.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:10:42,828 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.402e+02 4.892e+02 6.073e+02 7.229e+02 1.276e+03, threshold=1.215e+03, percent-clipped=6.0 2023-03-08 19:10:47,120 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16929.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 19:10:56,187 INFO [train.py:898] (3/4) Epoch 5, batch 2400, loss[loss=0.2331, simple_loss=0.3017, pruned_loss=0.08222, over 18501.00 frames. ], tot_loss[loss=0.2353, simple_loss=0.31, pruned_loss=0.08032, over 3598318.38 frames. ], batch size: 51, lr: 2.16e-02, grad_scale: 8.0 2023-03-08 19:11:31,356 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16967.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 19:11:49,685 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4634, 6.0600, 5.3833, 5.7730, 5.5860, 5.5848, 6.1421, 6.0577], device='cuda:3'), covar=tensor([0.0973, 0.0504, 0.0359, 0.0652, 0.1307, 0.0622, 0.0415, 0.0483], device='cuda:3'), in_proj_covar=tensor([0.0399, 0.0331, 0.0254, 0.0356, 0.0495, 0.0360, 0.0404, 0.0336], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-08 19:11:54,510 INFO [train.py:898] (3/4) Epoch 5, batch 2450, loss[loss=0.2148, simple_loss=0.2894, pruned_loss=0.07015, over 18352.00 frames. ], tot_loss[loss=0.2345, simple_loss=0.3092, pruned_loss=0.0799, over 3588471.27 frames. 
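
Every optim.py:369 record follows one pattern: the five numbers after "grad-norm quartiles" are the min / 25% / median / 75% / max of recently observed gradient norms, the clipping threshold is Clipping_scale times the median (in the record just above, 2.0 * 6.073e+02 ≈ 1.215e+03), and percent-clipped is the share of recent steps whose norm exceeded the threshold. A sketch of that bookkeeping, assuming one scalar gradient norm per optimizer step; the class and window size are illustrative:

    import torch
    from collections import deque

    class GradNormClipStats:
        def __init__(self, clipping_scale: float = 2.0, window: int = 128):
            self.clipping_scale = clipping_scale
            self.norms = deque(maxlen=window)
            self.clipped = deque(maxlen=window)

        def update(self, grad_norm: float) -> float:
            """Record one step's grad norm; return the clipping threshold."""
            t = torch.tensor(list(self.norms) or [grad_norm])
            threshold = self.clipping_scale * t.quantile(0.5).item()
            self.norms.append(grad_norm)
            self.clipped.append(grad_norm > threshold)
            return threshold

        def summary(self) -> str:
            t = torch.tensor(list(self.norms))
            q = [t.quantile(p).item() for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
            pct = 100.0 * sum(self.clipped) / len(self.clipped)
            return ("grad-norm quartiles "
                    + " ".join(f"{v:.3e}" for v in q)
                    + f", threshold={self.clipping_scale * q[2]:.3e}"
                    + f", percent-clipped={pct:.1f}")
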
], batch size: 46, lr: 2.16e-02, grad_scale: 8.0 2023-03-08 19:12:02,787 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8289, 1.9137, 3.1541, 2.7805, 3.3528, 4.8724, 4.3289, 4.2910], device='cuda:3'), covar=tensor([0.0383, 0.0921, 0.0821, 0.0634, 0.1030, 0.0029, 0.0206, 0.0144], device='cuda:3'), in_proj_covar=tensor([0.0152, 0.0198, 0.0164, 0.0192, 0.0283, 0.0108, 0.0169, 0.0144], device='cuda:3'), out_proj_covar=tensor([1.1346e-04, 1.4813e-04, 1.3213e-04, 1.3339e-04, 2.1034e-04, 7.4415e-05, 1.2667e-04, 1.0838e-04], device='cuda:3') 2023-03-08 19:12:35,423 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4226, 5.0768, 5.6666, 5.5382, 5.4132, 6.2064, 5.8009, 5.5863], device='cuda:3'), covar=tensor([0.0906, 0.0586, 0.0608, 0.0438, 0.1058, 0.0645, 0.0387, 0.1151], device='cuda:3'), in_proj_covar=tensor([0.0245, 0.0186, 0.0193, 0.0187, 0.0232, 0.0270, 0.0177, 0.0270], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 19:12:39,637 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.725e+02 4.440e+02 5.461e+02 7.873e+02 1.788e+03, threshold=1.092e+03, percent-clipped=5.0 2023-03-08 19:12:41,093 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17027.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:12:53,129 INFO [train.py:898] (3/4) Epoch 5, batch 2500, loss[loss=0.2332, simple_loss=0.3129, pruned_loss=0.07678, over 18584.00 frames. ], tot_loss[loss=0.2337, simple_loss=0.3087, pruned_loss=0.07935, over 3599993.69 frames. ], batch size: 54, lr: 2.15e-02, grad_scale: 8.0 2023-03-08 19:13:37,616 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=17075.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:13:50,994 INFO [train.py:898] (3/4) Epoch 5, batch 2550, loss[loss=0.2494, simple_loss=0.3307, pruned_loss=0.08404, over 17819.00 frames. ], tot_loss[loss=0.2346, simple_loss=0.3097, pruned_loss=0.07973, over 3600565.61 frames. ], batch size: 70, lr: 2.15e-02, grad_scale: 8.0 2023-03-08 19:13:55,869 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=2.00 vs. limit=2.0 2023-03-08 19:14:36,785 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.128e+02 4.504e+02 5.454e+02 6.580e+02 1.151e+03, threshold=1.091e+03, percent-clipped=2.0 2023-03-08 19:14:49,069 INFO [train.py:898] (3/4) Epoch 5, batch 2600, loss[loss=0.2375, simple_loss=0.3185, pruned_loss=0.07827, over 18495.00 frames. ], tot_loss[loss=0.2349, simple_loss=0.31, pruned_loss=0.07985, over 3597178.40 frames. ], batch size: 53, lr: 2.15e-02, grad_scale: 8.0 2023-03-08 19:14:52,010 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.82 vs. limit=2.0 2023-03-08 19:15:47,080 INFO [train.py:898] (3/4) Epoch 5, batch 2650, loss[loss=0.2648, simple_loss=0.3381, pruned_loss=0.09573, over 18471.00 frames. ], tot_loss[loss=0.2352, simple_loss=0.3103, pruned_loss=0.08006, over 3588813.31 frames. ], batch size: 51, lr: 2.14e-02, grad_scale: 8.0 2023-03-08 19:15:49,628 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17189.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:15:52,218 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.76 vs. 
limit=2.0 2023-03-08 19:16:25,274 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6391, 5.3383, 5.2818, 5.1746, 4.8425, 5.1961, 4.4286, 5.0731], device='cuda:3'), covar=tensor([0.0182, 0.0228, 0.0210, 0.0241, 0.0360, 0.0199, 0.1165, 0.0273], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0172, 0.0158, 0.0149, 0.0166, 0.0170, 0.0241, 0.0154], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005], device='cuda:3') 2023-03-08 19:16:31,332 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17224.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 19:16:33,374 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.843e+02 4.713e+02 5.523e+02 7.197e+02 1.239e+03, threshold=1.105e+03, percent-clipped=3.0 2023-03-08 19:16:39,624 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.2321, 3.7201, 5.3263, 4.5145, 3.4504, 2.9921, 4.6993, 5.1740], device='cuda:3'), covar=tensor([0.0912, 0.1269, 0.0046, 0.0253, 0.0677, 0.0964, 0.0206, 0.0070], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0170, 0.0067, 0.0134, 0.0157, 0.0158, 0.0133, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0002, 0.0003, 0.0002, 0.0001], device='cuda:3') 2023-03-08 19:16:45,587 INFO [train.py:898] (3/4) Epoch 5, batch 2700, loss[loss=0.2347, simple_loss=0.3177, pruned_loss=0.07582, over 18474.00 frames. ], tot_loss[loss=0.2348, simple_loss=0.31, pruned_loss=0.07985, over 3582296.14 frames. ], batch size: 53, lr: 2.14e-02, grad_scale: 8.0 2023-03-08 19:17:01,445 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17250.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:17:03,734 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9850, 4.4064, 4.6760, 3.5686, 3.7671, 3.7452, 2.5004, 1.8890], device='cuda:3'), covar=tensor([0.0159, 0.0189, 0.0042, 0.0212, 0.0348, 0.0149, 0.0704, 0.0931], device='cuda:3'), in_proj_covar=tensor([0.0040, 0.0037, 0.0032, 0.0046, 0.0063, 0.0039, 0.0062, 0.0068], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:3') 2023-03-08 19:17:15,312 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17262.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 19:17:43,515 INFO [train.py:898] (3/4) Epoch 5, batch 2750, loss[loss=0.2071, simple_loss=0.2829, pruned_loss=0.06563, over 18243.00 frames. ], tot_loss[loss=0.2346, simple_loss=0.3097, pruned_loss=0.0797, over 3586637.32 frames. ], batch size: 45, lr: 2.14e-02, grad_scale: 8.0 2023-03-08 19:17:46,151 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4045, 5.8849, 5.3123, 5.6768, 5.3788, 5.5032, 5.9806, 5.8937], device='cuda:3'), covar=tensor([0.0962, 0.0578, 0.0475, 0.0640, 0.1407, 0.0513, 0.0454, 0.0522], device='cuda:3'), in_proj_covar=tensor([0.0403, 0.0336, 0.0259, 0.0365, 0.0505, 0.0366, 0.0413, 0.0338], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-08 19:18:29,772 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.562e+02 4.313e+02 5.357e+02 6.708e+02 1.255e+03, threshold=1.071e+03, percent-clipped=3.0 2023-03-08 19:18:42,601 INFO [train.py:898] (3/4) Epoch 5, batch 2800, loss[loss=0.2118, simple_loss=0.289, pruned_loss=0.06731, over 18390.00 frames. ], tot_loss[loss=0.2345, simple_loss=0.3096, pruned_loss=0.07973, over 3578364.59 frames. 
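
The zipformer.py:625 records trace stochastic layer skipping: each encoder stack has a warmup window (warmup_begin..warmup_end, in batches), and on each batch some of its layers may be bypassed entirely, with the chosen indices logged as layers_to_drop. At this point in training (batch_count around 16k-17k, far past every window shown) most records drop nothing, and the occasional num_to_drop=1 suggests a small residual drop probability is kept after warmup. A sketch of such a policy; both probabilities are assumptions, not the repository's actual schedule:

    import random

    def choose_layers_to_drop(num_layers: int, batch_count: float,
                              warmup_end: float,
                              p_during_warmup: float = 0.5,
                              p_after_warmup: float = 0.05):
        """Pick encoder layers to bypass for this batch: aggressive dropping
        inside the warmup window, a small residual rate afterwards."""
        p = p_during_warmup if batch_count < warmup_end else p_after_warmup
        layers = {i for i in range(num_layers) if random.random() < p}
        return len(layers), layers

    # Mirrors a record like:
    # batch_count=17224.0, num_to_drop=1, layers_to_drop={0}
    num_to_drop, layers_to_drop = choose_layers_to_drop(
        num_layers=4, batch_count=17224.0, warmup_end=2000.0)
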
], batch size: 52, lr: 2.14e-02, grad_scale: 8.0 2023-03-08 19:19:12,212 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17362.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:19:36,989 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17383.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 19:19:41,194 INFO [train.py:898] (3/4) Epoch 5, batch 2850, loss[loss=0.2594, simple_loss=0.3347, pruned_loss=0.09203, over 18461.00 frames. ], tot_loss[loss=0.2356, simple_loss=0.3101, pruned_loss=0.08054, over 3566071.37 frames. ], batch size: 59, lr: 2.13e-02, grad_scale: 8.0 2023-03-08 19:19:46,116 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.2278, 3.4801, 5.1493, 4.3027, 3.0210, 3.0817, 4.3960, 5.2330], device='cuda:3'), covar=tensor([0.0996, 0.1367, 0.0042, 0.0287, 0.0845, 0.0973, 0.0264, 0.0070], device='cuda:3'), in_proj_covar=tensor([0.0130, 0.0173, 0.0066, 0.0134, 0.0160, 0.0160, 0.0136, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0002, 0.0003, 0.0002, 0.0001], device='cuda:3') 2023-03-08 19:20:24,492 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17423.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:20:27,501 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.643e+02 4.399e+02 5.133e+02 6.375e+02 2.228e+03, threshold=1.027e+03, percent-clipped=3.0 2023-03-08 19:20:40,826 INFO [train.py:898] (3/4) Epoch 5, batch 2900, loss[loss=0.2356, simple_loss=0.3139, pruned_loss=0.07865, over 18649.00 frames. ], tot_loss[loss=0.2351, simple_loss=0.3097, pruned_loss=0.08031, over 3568964.91 frames. ], batch size: 52, lr: 2.13e-02, grad_scale: 8.0 2023-03-08 19:20:49,201 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17444.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 19:21:33,120 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.58 vs. limit=2.0 2023-03-08 19:21:38,382 INFO [train.py:898] (3/4) Epoch 5, batch 2950, loss[loss=0.2404, simple_loss=0.324, pruned_loss=0.07835, over 18188.00 frames. ], tot_loss[loss=0.2352, simple_loss=0.3101, pruned_loss=0.08014, over 3582586.06 frames. ], batch size: 60, lr: 2.13e-02, grad_scale: 8.0 2023-03-08 19:21:59,236 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.61 vs. limit=2.0 2023-03-08 19:22:22,779 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17524.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 19:22:24,449 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.921e+02 4.474e+02 5.609e+02 7.240e+02 1.382e+03, threshold=1.122e+03, percent-clipped=5.0 2023-03-08 19:22:36,712 INFO [train.py:898] (3/4) Epoch 5, batch 3000, loss[loss=0.2374, simple_loss=0.3111, pruned_loss=0.08183, over 18393.00 frames. ], tot_loss[loss=0.2351, simple_loss=0.3102, pruned_loss=0.08007, over 3583025.44 frames. 
], batch size: 50, lr: 2.12e-02, grad_scale: 8.0 2023-03-08 19:22:36,713 INFO [train.py:923] (3/4) Computing validation loss 2023-03-08 19:22:47,298 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4736, 4.9022, 4.9038, 4.7955, 4.4712, 4.5238, 5.0391, 4.9103], device='cuda:3'), covar=tensor([0.1184, 0.0740, 0.0335, 0.0703, 0.1595, 0.0754, 0.0536, 0.0713], device='cuda:3'), in_proj_covar=tensor([0.0393, 0.0336, 0.0262, 0.0369, 0.0492, 0.0362, 0.0414, 0.0341], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-08 19:22:48,698 INFO [train.py:932] (3/4) Epoch 5, validation: loss=0.1806, simple_loss=0.2829, pruned_loss=0.03918, over 944034.00 frames. 2023-03-08 19:22:48,699 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-08 19:22:58,795 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17545.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:23:19,545 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17562.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 19:23:30,585 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=17572.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 19:23:47,550 INFO [train.py:898] (3/4) Epoch 5, batch 3050, loss[loss=0.2216, simple_loss=0.2979, pruned_loss=0.07266, over 18289.00 frames. ], tot_loss[loss=0.2346, simple_loss=0.3098, pruned_loss=0.07965, over 3588018.49 frames. ], batch size: 49, lr: 2.12e-02, grad_scale: 8.0 2023-03-08 19:24:15,394 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=17610.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:24:33,683 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.113e+02 4.467e+02 5.746e+02 6.739e+02 1.721e+03, threshold=1.149e+03, percent-clipped=3.0 2023-03-08 19:24:46,751 INFO [train.py:898] (3/4) Epoch 5, batch 3100, loss[loss=0.2723, simple_loss=0.3534, pruned_loss=0.09558, over 18467.00 frames. ], tot_loss[loss=0.2351, simple_loss=0.3103, pruned_loss=0.07994, over 3583030.72 frames. ], batch size: 59, lr: 2.12e-02, grad_scale: 8.0 2023-03-08 19:25:44,934 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8626, 2.6873, 4.1212, 4.1657, 2.2167, 4.4079, 3.8080, 2.7783], device='cuda:3'), covar=tensor([0.0300, 0.1134, 0.0164, 0.0156, 0.1448, 0.0143, 0.0342, 0.0837], device='cuda:3'), in_proj_covar=tensor([0.0151, 0.0188, 0.0104, 0.0112, 0.0189, 0.0142, 0.0152, 0.0173], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:25:45,574 INFO [train.py:898] (3/4) Epoch 5, batch 3150, loss[loss=0.2663, simple_loss=0.333, pruned_loss=0.09979, over 18492.00 frames. ], tot_loss[loss=0.2336, simple_loss=0.309, pruned_loss=0.07909, over 3588527.02 frames. 
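
At regular intervals (here at Epoch 5, batch 3000, and again at the start of Epoch 6 below) training pauses to compute the validation loss over the full dev set, which is why the frame count is the same 944034.00 every time; the same 0.5 * simple + pruned relation holds for the validation numbers (0.5 * 0.2829 + 0.03918 ≈ 0.1806). Peak GPU memory is read from CUDA's allocator statistics. A sketch of the loop; compute_loss and the loader are placeholders:

    import torch

    @torch.no_grad()
    def validate(model, dev_loader, compute_loss) -> float:
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        for batch in dev_loader:
            loss, num_frames = compute_loss(model, batch)  # placeholder
            tot_loss += loss.item()
            tot_frames += num_frames
        model.train()
        # Reported per frame, "over 944034.00 frames" every time.
        return tot_loss / tot_frames

    # "Maximum memory allocated so far is 19934MB":
    max_mb = torch.cuda.max_memory_allocated() // (1024 * 1024)
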
], batch size: 53, lr: 2.12e-02, grad_scale: 8.0 2023-03-08 19:26:00,887 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17700.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:26:22,152 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17718.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:26:29,291 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9623, 3.0673, 4.3466, 4.0790, 2.5815, 4.7581, 3.7326, 2.9315], device='cuda:3'), covar=tensor([0.0327, 0.0998, 0.0126, 0.0201, 0.1381, 0.0100, 0.0485, 0.0855], device='cuda:3'), in_proj_covar=tensor([0.0152, 0.0189, 0.0104, 0.0112, 0.0191, 0.0143, 0.0154, 0.0175], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:26:31,144 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.328e+02 4.460e+02 5.383e+02 6.414e+02 1.196e+03, threshold=1.077e+03, percent-clipped=2.0 2023-03-08 19:26:35,924 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17730.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:26:44,014 INFO [train.py:898] (3/4) Epoch 5, batch 3200, loss[loss=0.2149, simple_loss=0.2893, pruned_loss=0.07026, over 18358.00 frames. ], tot_loss[loss=0.2328, simple_loss=0.3082, pruned_loss=0.07871, over 3589422.35 frames. ], batch size: 50, lr: 2.11e-02, grad_scale: 8.0 2023-03-08 19:26:46,685 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17739.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 19:26:55,582 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8585, 4.9332, 5.0595, 4.7179, 4.9233, 4.7231, 5.2517, 5.2468], device='cuda:3'), covar=tensor([0.0061, 0.0059, 0.0063, 0.0089, 0.0055, 0.0108, 0.0071, 0.0080], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0050, 0.0050, 0.0063, 0.0054, 0.0071, 0.0060, 0.0060], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 19:27:12,081 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17761.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:27:40,175 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7278, 2.6250, 4.1368, 4.0312, 2.3407, 4.5169, 3.9294, 2.3316], device='cuda:3'), covar=tensor([0.0346, 0.1162, 0.0174, 0.0169, 0.1479, 0.0101, 0.0312, 0.1023], device='cuda:3'), in_proj_covar=tensor([0.0150, 0.0187, 0.0103, 0.0110, 0.0188, 0.0142, 0.0151, 0.0173], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:27:42,483 INFO [train.py:898] (3/4) Epoch 5, batch 3250, loss[loss=0.2509, simple_loss=0.319, pruned_loss=0.09143, over 18522.00 frames. ], tot_loss[loss=0.2328, simple_loss=0.3083, pruned_loss=0.07866, over 3593100.82 frames. 
], batch size: 49, lr: 2.11e-02, grad_scale: 8.0 2023-03-08 19:27:47,484 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17791.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:28:13,176 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5096, 5.4161, 4.9364, 5.3939, 5.3847, 4.8284, 5.3418, 4.8985], device='cuda:3'), covar=tensor([0.0353, 0.0401, 0.1559, 0.0675, 0.0435, 0.0362, 0.0317, 0.0931], device='cuda:3'), in_proj_covar=tensor([0.0309, 0.0346, 0.0504, 0.0271, 0.0255, 0.0324, 0.0332, 0.0437], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-08 19:28:28,461 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.849e+02 4.466e+02 5.558e+02 6.730e+02 1.712e+03, threshold=1.112e+03, percent-clipped=5.0 2023-03-08 19:28:40,498 INFO [train.py:898] (3/4) Epoch 5, batch 3300, loss[loss=0.3171, simple_loss=0.3583, pruned_loss=0.138, over 11930.00 frames. ], tot_loss[loss=0.2331, simple_loss=0.3081, pruned_loss=0.07904, over 3576348.40 frames. ], batch size: 131, lr: 2.11e-02, grad_scale: 16.0 2023-03-08 19:28:50,375 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17845.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:29:26,210 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0397, 2.4673, 2.4097, 2.7336, 3.2610, 3.2610, 2.6208, 2.8608], device='cuda:3'), covar=tensor([0.0292, 0.0391, 0.0870, 0.0318, 0.0238, 0.0110, 0.0521, 0.0325], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0077, 0.0136, 0.0102, 0.0077, 0.0058, 0.0097, 0.0102], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:29:39,231 INFO [train.py:898] (3/4) Epoch 5, batch 3350, loss[loss=0.2191, simple_loss=0.2885, pruned_loss=0.0749, over 18436.00 frames. ], tot_loss[loss=0.2323, simple_loss=0.3075, pruned_loss=0.07858, over 3580352.71 frames. ], batch size: 42, lr: 2.11e-02, grad_scale: 16.0 2023-03-08 19:29:46,131 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=17893.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:30:25,122 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.611e+02 4.192e+02 5.439e+02 6.827e+02 1.413e+03, threshold=1.088e+03, percent-clipped=2.0 2023-03-08 19:30:28,558 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-08 19:30:30,604 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1482, 2.3239, 2.4572, 2.6824, 3.3609, 3.3057, 2.7653, 3.1531], device='cuda:3'), covar=tensor([0.0269, 0.0467, 0.0769, 0.0399, 0.0198, 0.0193, 0.0434, 0.0262], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0078, 0.0135, 0.0102, 0.0078, 0.0058, 0.0097, 0.0101], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:30:38,006 INFO [train.py:898] (3/4) Epoch 5, batch 3400, loss[loss=0.2481, simple_loss=0.3215, pruned_loss=0.08731, over 18256.00 frames. ], tot_loss[loss=0.2314, simple_loss=0.3065, pruned_loss=0.07812, over 3572506.22 frames. ], batch size: 60, lr: 2.10e-02, grad_scale: 16.0 2023-03-08 19:31:05,003 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17960.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:31:16,149 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=6.18 vs. 
limit=5.0 2023-03-08 19:31:36,927 INFO [train.py:898] (3/4) Epoch 5, batch 3450, loss[loss=0.2557, simple_loss=0.3394, pruned_loss=0.08603, over 18372.00 frames. ], tot_loss[loss=0.2313, simple_loss=0.3064, pruned_loss=0.07807, over 3572396.55 frames. ], batch size: 55, lr: 2.10e-02, grad_scale: 16.0 2023-03-08 19:31:47,917 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.3750, 1.8603, 1.8728, 2.0229, 2.4067, 2.3935, 2.1434, 2.2257], device='cuda:3'), covar=tensor([0.0230, 0.0327, 0.0661, 0.0276, 0.0167, 0.0116, 0.0285, 0.0254], device='cuda:3'), in_proj_covar=tensor([0.0102, 0.0078, 0.0135, 0.0102, 0.0076, 0.0057, 0.0096, 0.0102], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:32:17,488 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18018.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:32:20,896 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18021.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:32:26,801 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.873e+02 4.562e+02 5.412e+02 6.904e+02 1.746e+03, threshold=1.082e+03, percent-clipped=5.0 2023-03-08 19:32:39,598 INFO [train.py:898] (3/4) Epoch 5, batch 3500, loss[loss=0.2432, simple_loss=0.3155, pruned_loss=0.08542, over 17175.00 frames. ], tot_loss[loss=0.2319, simple_loss=0.3068, pruned_loss=0.0785, over 3566402.44 frames. ], batch size: 78, lr: 2.10e-02, grad_scale: 16.0 2023-03-08 19:32:42,247 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18039.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 19:33:01,672 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18056.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:33:02,201 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-08 19:33:12,388 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=18066.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:33:17,152 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.73 vs. limit=5.0 2023-03-08 19:33:33,590 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18086.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:33:34,496 INFO [train.py:898] (3/4) Epoch 5, batch 3550, loss[loss=0.2393, simple_loss=0.321, pruned_loss=0.0788, over 17756.00 frames. ], tot_loss[loss=0.2319, simple_loss=0.307, pruned_loss=0.07845, over 3566981.25 frames. ], batch size: 70, lr: 2.09e-02, grad_scale: 16.0 2023-03-08 19:33:34,630 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=18087.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 19:33:54,985 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0 2023-03-08 19:34:17,503 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.924e+02 4.145e+02 5.139e+02 6.642e+02 1.282e+03, threshold=1.028e+03, percent-clipped=2.0 2023-03-08 19:34:29,569 INFO [train.py:898] (3/4) Epoch 5, batch 3600, loss[loss=0.2343, simple_loss=0.3204, pruned_loss=0.07412, over 15872.00 frames. ], tot_loss[loss=0.231, simple_loss=0.3062, pruned_loss=0.07793, over 3559134.33 frames. 
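
The scaling.py:679 "Whitening" records compare a whiteness metric of a layer's activations against a limit (2.0 for the smaller channel groups here, 5.0 for the full 384-channel checks); presumably a corrective term applies only when the metric exceeds the limit, and these lines fire when it comes close or goes over. One plausible definition with the right behaviour, assumed here rather than taken from the repository: per channel group, the ratio of the mean squared eigenvalue of the feature covariance to its squared mean eigenvalue, which equals 1.0 for perfectly white features and grows as variance concentrates in a few directions:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        """x: (num_frames, num_channels). Returns a scalar >= 1.0 that equals
        1.0 when each group's channel covariance is a multiple of the identity.
        Uses trace identities: mean eig = tr(C)/c, mean squared eig = tr(C@C)/c."""
        n, c = x.shape
        x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
        x = x - x.mean(dim=1, keepdim=True)
        cov = x.transpose(1, 2) @ x / n                       # (groups, cg, cg)
        mean_eig = cov.diagonal(dim1=1, dim2=2).mean(dim=1)
        mean_sq_eig = (cov @ cov).diagonal(dim1=1, dim2=2).mean(dim=1)
        return (mean_sq_eig / mean_eig.clamp(min=1e-20) ** 2).mean()

    # White noise sits near 1.0, well under "limit=2.0" in the records above:
    print(whitening_metric(torch.randn(1000, 96), num_groups=8))
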
], batch size: 94, lr: 2.09e-02, grad_scale: 16.0 2023-03-08 19:35:34,900 INFO [train.py:898] (3/4) Epoch 6, batch 0, loss[loss=0.2291, simple_loss=0.306, pruned_loss=0.0761, over 18302.00 frames. ], tot_loss[loss=0.2291, simple_loss=0.306, pruned_loss=0.0761, over 18302.00 frames. ], batch size: 54, lr: 1.95e-02, grad_scale: 16.0 2023-03-08 19:35:34,900 INFO [train.py:923] (3/4) Computing validation loss 2023-03-08 19:35:45,085 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2158, 3.9802, 4.1246, 3.9150, 4.0711, 3.9654, 4.3044, 4.2859], device='cuda:3'), covar=tensor([0.0063, 0.0092, 0.0078, 0.0090, 0.0074, 0.0100, 0.0082, 0.0088], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0046, 0.0047, 0.0058, 0.0052, 0.0068, 0.0057, 0.0056], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 19:35:46,590 INFO [train.py:932] (3/4) Epoch 6, validation: loss=0.1816, simple_loss=0.2843, pruned_loss=0.0395, over 944034.00 frames. 2023-03-08 19:35:46,591 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-08 19:35:50,286 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18174.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:35:51,449 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18175.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:36:10,424 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3174, 4.2520, 2.4140, 4.5339, 5.2424, 2.6127, 3.7746, 3.7325], device='cuda:3'), covar=tensor([0.0067, 0.1076, 0.1423, 0.0371, 0.0041, 0.1311, 0.0620, 0.0887], device='cuda:3'), in_proj_covar=tensor([0.0085, 0.0164, 0.0176, 0.0170, 0.0075, 0.0166, 0.0179, 0.0179], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:36:44,658 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-08 19:36:44,909 INFO [train.py:898] (3/4) Epoch 6, batch 50, loss[loss=0.2532, simple_loss=0.328, pruned_loss=0.08918, over 17976.00 frames. ], tot_loss[loss=0.2353, simple_loss=0.3116, pruned_loss=0.07954, over 802274.43 frames. ], batch size: 65, lr: 1.95e-02, grad_scale: 8.0 2023-03-08 19:36:52,254 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.724e+02 4.768e+02 5.691e+02 6.790e+02 1.877e+03, threshold=1.138e+03, percent-clipped=9.0 2023-03-08 19:37:01,564 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18235.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:37:02,648 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18236.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:37:43,157 INFO [train.py:898] (3/4) Epoch 6, batch 100, loss[loss=0.221, simple_loss=0.2967, pruned_loss=0.07271, over 18397.00 frames. ], tot_loss[loss=0.2297, simple_loss=0.306, pruned_loss=0.07671, over 1434433.85 frames. ], batch size: 48, lr: 1.95e-02, grad_scale: 8.0 2023-03-08 19:38:11,925 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-08 19:38:36,393 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18316.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:38:41,806 INFO [train.py:898] (3/4) Epoch 6, batch 150, loss[loss=0.1888, simple_loss=0.267, pruned_loss=0.0553, over 18413.00 frames. 
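
The learning rate decays smoothly within an epoch and steps down at the epoch boundary: the last Epoch 5 records show lr 2.09e-02, the first Epoch 6 records 1.95e-02 at essentially the same global step. Both values are reproduced by a schedule of the form base_lr * ((step^2 + lr_batches^2)/lr_batches^2)^-0.25 * ((epoch^2 + lr_epochs^2)/lr_epochs^2)^-0.25 with base_lr=0.05, lr_batches=5000, lr_epochs=3.5, and the epoch index bumped at each boundary (4 during Epoch 5, 5 during Epoch 6). Treat the formula and constants as a fit to the logged values, not a quotation of the training code:

    def scheduled_lr(base_lr: float, step: float, epoch: float,
                     lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
        batch_factor = ((step ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    # Around global step ~18120, the Epoch 5 -> 6 boundary above:
    print(f"{scheduled_lr(0.05, 18120, 4):.2e}")  # 2.09e-02, end of Epoch 5
    print(f"{scheduled_lr(0.05, 18120, 5):.2e}")  # 1.95e-02, start of Epoch 6
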
], tot_loss[loss=0.2283, simple_loss=0.3045, pruned_loss=0.07609, over 1914530.60 frames. ], batch size: 42, lr: 1.94e-02, grad_scale: 8.0 2023-03-08 19:38:48,542 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.997e+02 4.126e+02 4.935e+02 6.131e+02 1.362e+03, threshold=9.869e+02, percent-clipped=2.0 2023-03-08 19:39:22,901 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18356.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:39:40,014 INFO [train.py:898] (3/4) Epoch 6, batch 200, loss[loss=0.2571, simple_loss=0.3335, pruned_loss=0.0903, over 16203.00 frames. ], tot_loss[loss=0.2294, simple_loss=0.3056, pruned_loss=0.07665, over 2279531.33 frames. ], batch size: 94, lr: 1.94e-02, grad_scale: 8.0 2023-03-08 19:39:57,817 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18386.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:40:19,056 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=18404.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:40:39,255 INFO [train.py:898] (3/4) Epoch 6, batch 250, loss[loss=0.1865, simple_loss=0.2652, pruned_loss=0.05387, over 18252.00 frames. ], tot_loss[loss=0.2274, simple_loss=0.3037, pruned_loss=0.07554, over 2570030.52 frames. ], batch size: 45, lr: 1.94e-02, grad_scale: 8.0 2023-03-08 19:40:46,045 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.187e+02 4.523e+02 5.806e+02 6.957e+02 1.437e+03, threshold=1.161e+03, percent-clipped=4.0 2023-03-08 19:40:54,350 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=18434.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:40:58,580 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7102, 1.7552, 3.1515, 2.6041, 3.5942, 5.1199, 4.2337, 4.3031], device='cuda:3'), covar=tensor([0.0502, 0.1154, 0.1128, 0.0766, 0.1029, 0.0033, 0.0275, 0.0181], device='cuda:3'), in_proj_covar=tensor([0.0162, 0.0211, 0.0190, 0.0198, 0.0291, 0.0115, 0.0181, 0.0149], device='cuda:3'), out_proj_covar=tensor([1.1793e-04, 1.5410e-04, 1.4665e-04, 1.3419e-04, 2.1324e-04, 7.7309e-05, 1.3111e-04, 1.0859e-04], device='cuda:3') 2023-03-08 19:41:04,190 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8714, 1.8884, 3.1945, 2.7356, 3.7448, 5.1739, 4.2755, 4.2438], device='cuda:3'), covar=tensor([0.0457, 0.1068, 0.0984, 0.0702, 0.0974, 0.0029, 0.0272, 0.0197], device='cuda:3'), in_proj_covar=tensor([0.0162, 0.0210, 0.0189, 0.0198, 0.0290, 0.0115, 0.0181, 0.0148], device='cuda:3'), out_proj_covar=tensor([1.1754e-04, 1.5355e-04, 1.4613e-04, 1.3366e-04, 2.1231e-04, 7.6994e-05, 1.3070e-04, 1.0817e-04], device='cuda:3') 2023-03-08 19:41:07,834 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.82 vs. limit=5.0 2023-03-08 19:41:38,853 INFO [train.py:898] (3/4) Epoch 6, batch 300, loss[loss=0.229, simple_loss=0.3127, pruned_loss=0.07268, over 18470.00 frames. ], tot_loss[loss=0.2265, simple_loss=0.3031, pruned_loss=0.07497, over 2792856.18 frames. 
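
Training runs in fp16, and grad_scale is the dynamic loss scale of mixed-precision training. Its movements in this log match the standard policy: it doubled from 8.0 to 16.0 at Epoch 5, batch 3300 after a long overflow-free stretch, and is back to 8.0 by Epoch 6, batch 50, i.e. it was halved when a step produced inf/nan gradients. A sketch with PyTorch's GradScaler; the constructor values are chosen to match the log, and compute_loss is a placeholder:

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=8.0, growth_factor=2.0,
                                       backoff_factor=0.5)

    def train_step(model, batch, optimizer, compute_loss):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():           # forward/loss in fp16
            loss = compute_loss(model, batch)     # placeholder
        scaler.scale(loss).backward()             # scale up to avoid underflow
        scaler.step(optimizer)                    # unscales; skips step on inf/nan
        scaler.update()                           # doubles after a clean run,
                                                  # halves after an overflow
        return loss.detach(), scaler.get_scale()  # logged as grad_scale
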
], batch size: 53, lr: 1.94e-02, grad_scale: 8.0 2023-03-08 19:42:01,351 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18490.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:42:22,451 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18508.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 19:42:36,902 INFO [train.py:898] (3/4) Epoch 6, batch 350, loss[loss=0.2379, simple_loss=0.3192, pruned_loss=0.07829, over 18587.00 frames. ], tot_loss[loss=0.2266, simple_loss=0.3036, pruned_loss=0.07482, over 2970668.34 frames. ], batch size: 54, lr: 1.93e-02, grad_scale: 8.0 2023-03-08 19:42:44,091 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.993e+02 4.095e+02 4.916e+02 6.743e+02 1.094e+03, threshold=9.831e+02, percent-clipped=0.0 2023-03-08 19:42:47,457 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18530.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:42:48,600 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18531.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:43:11,062 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18551.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:43:31,775 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18569.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 19:43:33,751 INFO [train.py:898] (3/4) Epoch 6, batch 400, loss[loss=0.2426, simple_loss=0.3198, pruned_loss=0.08266, over 18304.00 frames. ], tot_loss[loss=0.2273, simple_loss=0.3042, pruned_loss=0.07523, over 3112121.89 frames. ], batch size: 54, lr: 1.93e-02, grad_scale: 8.0 2023-03-08 19:44:26,386 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18616.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:44:31,603 INFO [train.py:898] (3/4) Epoch 6, batch 450, loss[loss=0.2267, simple_loss=0.3148, pruned_loss=0.06933, over 18587.00 frames. ], tot_loss[loss=0.2284, simple_loss=0.3054, pruned_loss=0.07566, over 3218257.38 frames. ], batch size: 54, lr: 1.93e-02, grad_scale: 8.0 2023-03-08 19:44:38,640 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.446e+02 4.323e+02 5.172e+02 6.972e+02 1.405e+03, threshold=1.034e+03, percent-clipped=7.0 2023-03-08 19:44:39,301 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. 
limit=2.0 2023-03-08 19:44:58,893 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4135, 4.3782, 4.4097, 4.2143, 4.3572, 4.2279, 4.6430, 4.5638], device='cuda:3'), covar=tensor([0.0073, 0.0103, 0.0067, 0.0098, 0.0074, 0.0125, 0.0107, 0.0117], device='cuda:3'), in_proj_covar=tensor([0.0064, 0.0049, 0.0048, 0.0061, 0.0053, 0.0070, 0.0060, 0.0059], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 19:44:58,936 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6269, 4.0432, 4.1800, 3.1316, 3.3231, 3.3576, 2.3589, 1.8904], device='cuda:3'), covar=tensor([0.0157, 0.0191, 0.0048, 0.0294, 0.0342, 0.0157, 0.0735, 0.1020], device='cuda:3'), in_proj_covar=tensor([0.0041, 0.0039, 0.0033, 0.0048, 0.0064, 0.0041, 0.0061, 0.0068], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:3') 2023-03-08 19:45:20,700 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=18664.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:45:29,551 INFO [train.py:898] (3/4) Epoch 6, batch 500, loss[loss=0.1829, simple_loss=0.2587, pruned_loss=0.0535, over 18189.00 frames. ], tot_loss[loss=0.2276, simple_loss=0.3045, pruned_loss=0.0754, over 3313888.19 frames. ], batch size: 44, lr: 1.93e-02, grad_scale: 8.0 2023-03-08 19:45:33,138 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18674.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:46:28,535 INFO [train.py:898] (3/4) Epoch 6, batch 550, loss[loss=0.224, simple_loss=0.3087, pruned_loss=0.06967, over 18569.00 frames. ], tot_loss[loss=0.2282, simple_loss=0.3045, pruned_loss=0.07595, over 3372148.86 frames. ], batch size: 54, lr: 1.92e-02, grad_scale: 8.0 2023-03-08 19:46:30,212 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7941, 3.5155, 1.7964, 4.3471, 3.1273, 4.8110, 2.1262, 4.0690], device='cuda:3'), covar=tensor([0.0415, 0.0753, 0.1479, 0.0423, 0.0819, 0.0109, 0.1145, 0.0276], device='cuda:3'), in_proj_covar=tensor([0.0154, 0.0191, 0.0162, 0.0174, 0.0168, 0.0133, 0.0171, 0.0159], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:46:35,267 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.775e+02 4.304e+02 5.425e+02 6.689e+02 1.717e+03, threshold=1.085e+03, percent-clipped=5.0 2023-03-08 19:46:45,276 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18735.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:47:26,404 INFO [train.py:898] (3/4) Epoch 6, batch 600, loss[loss=0.2243, simple_loss=0.2995, pruned_loss=0.07452, over 18280.00 frames. ], tot_loss[loss=0.2281, simple_loss=0.3051, pruned_loss=0.07557, over 3426196.20 frames. 
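
tot_loss is not the loss of a single batch but a frame-weighted running average with geometric forgetting: after the Epoch 6 restart its frame count rebuilds (802274 -> 1434433 -> 1914530 -> ... in the records above) and then saturates near 3.6M frames, exactly the shape produced by decaying the accumulated statistics by a constant factor each batch. A decay of about 0.995 per batch reproduces the logged counts; the constant is inferred from the log, not quoted from the code:

    class RunningLoss:
        """Tracks the frame-weighted average behind the tot_loss records."""

        def __init__(self, decay: float = 0.995):
            self.decay = decay
            self.weighted_loss = 0.0  # decayed sum of (per-frame loss * frames)
            self.frames = 0.0         # decayed sum of frames

        def update(self, loss_per_frame: float, num_frames: float):
            self.weighted_loss = self.weighted_loss * self.decay \
                + loss_per_frame * num_frames
            self.frames = self.frames * self.decay + num_frames
            # Rendered as: tot_loss[loss=..., over <frames> frames.]
            return self.weighted_loss / self.frames, self.frames
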
], batch size: 49, lr: 1.92e-02, grad_scale: 8.0 2023-03-08 19:47:47,459 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7499, 3.7778, 3.8193, 3.7323, 3.7166, 3.6393, 3.9724, 3.9412], device='cuda:3'), covar=tensor([0.0089, 0.0093, 0.0089, 0.0102, 0.0092, 0.0140, 0.0080, 0.0105], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0050, 0.0050, 0.0064, 0.0054, 0.0073, 0.0062, 0.0061], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 19:48:25,633 INFO [train.py:898] (3/4) Epoch 6, batch 650, loss[loss=0.2021, simple_loss=0.2782, pruned_loss=0.063, over 18483.00 frames. ], tot_loss[loss=0.2262, simple_loss=0.3031, pruned_loss=0.0746, over 3464359.93 frames. ], batch size: 44, lr: 1.92e-02, grad_scale: 8.0 2023-03-08 19:48:26,008 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5410, 5.2129, 5.2242, 5.0521, 4.6770, 5.0023, 4.2691, 4.9936], device='cuda:3'), covar=tensor([0.0281, 0.0292, 0.0205, 0.0284, 0.0497, 0.0271, 0.1512, 0.0282], device='cuda:3'), in_proj_covar=tensor([0.0132, 0.0177, 0.0161, 0.0162, 0.0170, 0.0177, 0.0247, 0.0155], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005], device='cuda:3') 2023-03-08 19:48:33,820 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.489e+02 4.300e+02 5.176e+02 5.986e+02 1.905e+03, threshold=1.035e+03, percent-clipped=4.0 2023-03-08 19:48:37,508 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18830.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:48:38,664 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18831.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:48:56,045 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18846.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:49:17,269 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18864.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 19:49:25,150 INFO [train.py:898] (3/4) Epoch 6, batch 700, loss[loss=0.248, simple_loss=0.3224, pruned_loss=0.0868, over 18464.00 frames. ], tot_loss[loss=0.2256, simple_loss=0.3026, pruned_loss=0.0743, over 3490893.97 frames. ], batch size: 59, lr: 1.92e-02, grad_scale: 8.0 2023-03-08 19:49:33,678 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=18878.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:49:35,247 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=18879.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:50:23,144 INFO [train.py:898] (3/4) Epoch 6, batch 750, loss[loss=0.2259, simple_loss=0.306, pruned_loss=0.07289, over 18567.00 frames. ], tot_loss[loss=0.2263, simple_loss=0.3032, pruned_loss=0.07472, over 3514748.26 frames. ], batch size: 54, lr: 1.91e-02, grad_scale: 8.0 2023-03-08 19:50:29,877 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.737e+02 4.561e+02 5.480e+02 6.846e+02 1.883e+03, threshold=1.096e+03, percent-clipped=6.0 2023-03-08 19:50:33,967 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18930.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:50:47,411 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.65 vs. limit=2.0 2023-03-08 19:51:21,252 INFO [train.py:898] (3/4) Epoch 6, batch 800, loss[loss=0.2033, simple_loss=0.277, pruned_loss=0.06476, over 18497.00 frames. 
], tot_loss[loss=0.2261, simple_loss=0.3034, pruned_loss=0.07447, over 3534516.23 frames. ], batch size: 47, lr: 1.91e-02, grad_scale: 8.0 2023-03-08 19:51:46,493 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18991.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:51:54,063 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5917, 3.4279, 4.0246, 2.8846, 3.5494, 2.4179, 2.4955, 2.4883], device='cuda:3'), covar=tensor([0.0569, 0.0436, 0.0073, 0.0340, 0.0508, 0.1790, 0.1576, 0.1020], device='cuda:3'), in_proj_covar=tensor([0.0169, 0.0178, 0.0086, 0.0141, 0.0196, 0.0224, 0.0211, 0.0185], device='cuda:3'), out_proj_covar=tensor([1.6045e-04, 1.7521e-04, 8.6657e-05, 1.3777e-04, 1.9120e-04, 2.1631e-04, 2.1157e-04, 1.8287e-04], device='cuda:3') 2023-03-08 19:52:20,900 INFO [train.py:898] (3/4) Epoch 6, batch 850, loss[loss=0.2284, simple_loss=0.2997, pruned_loss=0.07857, over 18413.00 frames. ], tot_loss[loss=0.2262, simple_loss=0.3033, pruned_loss=0.07449, over 3542185.38 frames. ], batch size: 48, lr: 1.91e-02, grad_scale: 8.0 2023-03-08 19:52:28,233 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.682e+02 3.950e+02 4.754e+02 5.949e+02 2.076e+03, threshold=9.508e+02, percent-clipped=3.0 2023-03-08 19:52:31,923 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19030.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:53:19,106 INFO [train.py:898] (3/4) Epoch 6, batch 900, loss[loss=0.2048, simple_loss=0.2866, pruned_loss=0.06148, over 18246.00 frames. ], tot_loss[loss=0.2258, simple_loss=0.303, pruned_loss=0.07424, over 3552606.50 frames. ], batch size: 45, lr: 1.91e-02, grad_scale: 8.0 2023-03-08 19:53:38,865 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2516, 5.2193, 4.6842, 5.1874, 5.1186, 4.5587, 5.1373, 4.8002], device='cuda:3'), covar=tensor([0.0414, 0.0393, 0.1458, 0.0605, 0.0498, 0.0452, 0.0297, 0.0841], device='cuda:3'), in_proj_covar=tensor([0.0322, 0.0368, 0.0514, 0.0283, 0.0264, 0.0336, 0.0341, 0.0453], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-08 19:54:07,768 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5512, 3.0866, 1.8488, 4.3056, 2.8137, 4.5470, 2.1128, 3.8510], device='cuda:3'), covar=tensor([0.0511, 0.0934, 0.1533, 0.0367, 0.1024, 0.0205, 0.1280, 0.0356], device='cuda:3'), in_proj_covar=tensor([0.0157, 0.0193, 0.0165, 0.0176, 0.0170, 0.0138, 0.0175, 0.0163], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 19:54:17,490 INFO [train.py:898] (3/4) Epoch 6, batch 950, loss[loss=0.2033, simple_loss=0.2785, pruned_loss=0.06406, over 18539.00 frames. ], tot_loss[loss=0.2251, simple_loss=0.3026, pruned_loss=0.07386, over 3565305.90 frames. 
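
These train.py:898 records are regular enough to scrape for plotting loss and learning-rate curves. A small parser written against the exact format above; this is a convenience helper, not part of the project:

    import re

    RECORD = re.compile(
        r"INFO \[train\.py:\d+\] \(\d/\d\) Epoch (\d+), batch (\d+), "
        r"loss\[loss=([\d.]+),.*?\], "
        r"tot_loss\[loss=([\d.]+),.*?over ([\d.]+) frames\.\s*\], "
        r"batch size: (\d+), lr: ([\d.e+-]+)",
        re.DOTALL)  # records wrap across lines in this dump

    def parse_train_records(text: str):
        """Yield (epoch, batch, loss, tot_loss, tot_frames, batch_size, lr)."""
        for m in RECORD.finditer(text):
            ep, b, loss, tot, frames, bsz, lr = m.groups()
            yield (int(ep), int(b), float(loss), float(tot),
                   float(frames), int(bsz), float(lr))
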
], batch size: 49, lr: 1.90e-02, grad_scale: 8.0 2023-03-08 19:54:24,265 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.702e+02 4.325e+02 5.195e+02 6.173e+02 1.123e+03, threshold=1.039e+03, percent-clipped=4.0 2023-03-08 19:54:26,846 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=19129.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 19:54:47,357 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19146.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:55:08,075 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19164.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 19:55:15,683 INFO [train.py:898] (3/4) Epoch 6, batch 1000, loss[loss=0.2087, simple_loss=0.2862, pruned_loss=0.06559, over 18287.00 frames. ], tot_loss[loss=0.225, simple_loss=0.3024, pruned_loss=0.07378, over 3575958.34 frames. ], batch size: 47, lr: 1.90e-02, grad_scale: 8.0 2023-03-08 19:55:35,872 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8498, 4.7067, 4.8371, 4.6740, 4.5864, 4.6368, 5.1455, 5.0390], device='cuda:3'), covar=tensor([0.0063, 0.0074, 0.0068, 0.0082, 0.0070, 0.0101, 0.0062, 0.0082], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0050, 0.0049, 0.0063, 0.0055, 0.0073, 0.0062, 0.0060], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 19:55:38,223 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=19190.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 19:55:42,581 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=19194.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:56:05,355 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=19212.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 19:56:15,294 INFO [train.py:898] (3/4) Epoch 6, batch 1050, loss[loss=0.2134, simple_loss=0.301, pruned_loss=0.06288, over 18614.00 frames. ], tot_loss[loss=0.2247, simple_loss=0.3023, pruned_loss=0.07352, over 3587467.78 frames. ], batch size: 52, lr: 1.90e-02, grad_scale: 8.0 2023-03-08 19:56:21,988 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.826e+02 4.167e+02 5.198e+02 6.760e+02 1.282e+03, threshold=1.040e+03, percent-clipped=3.0 2023-03-08 19:56:38,390 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-08 19:57:14,153 INFO [train.py:898] (3/4) Epoch 6, batch 1100, loss[loss=0.253, simple_loss=0.3233, pruned_loss=0.09139, over 18294.00 frames. ], tot_loss[loss=0.2239, simple_loss=0.3017, pruned_loss=0.07307, over 3596299.85 frames. ], batch size: 54, lr: 1.90e-02, grad_scale: 8.0 2023-03-08 19:57:15,957 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. 
limit=2.0 2023-03-08 19:57:31,192 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19286.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:57:47,546 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4444, 5.3160, 4.8799, 5.4125, 5.2498, 4.7091, 5.2788, 4.9918], device='cuda:3'), covar=tensor([0.0439, 0.0410, 0.1534, 0.0568, 0.0503, 0.0489, 0.0369, 0.0836], device='cuda:3'), in_proj_covar=tensor([0.0321, 0.0363, 0.0513, 0.0279, 0.0265, 0.0335, 0.0343, 0.0442], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-08 19:58:13,169 INFO [train.py:898] (3/4) Epoch 6, batch 1150, loss[loss=0.2227, simple_loss=0.3072, pruned_loss=0.06908, over 18356.00 frames. ], tot_loss[loss=0.2243, simple_loss=0.3018, pruned_loss=0.07342, over 3584412.98 frames. ], batch size: 55, lr: 1.90e-02, grad_scale: 8.0 2023-03-08 19:58:20,057 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.317e+02 3.874e+02 4.905e+02 6.066e+02 2.100e+03, threshold=9.811e+02, percent-clipped=2.0 2023-03-08 19:58:23,671 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19330.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:59:11,798 INFO [train.py:898] (3/4) Epoch 6, batch 1200, loss[loss=0.2289, simple_loss=0.3073, pruned_loss=0.07526, over 17305.00 frames. ], tot_loss[loss=0.2253, simple_loss=0.3029, pruned_loss=0.07382, over 3575945.13 frames. ], batch size: 78, lr: 1.89e-02, grad_scale: 8.0 2023-03-08 19:59:20,008 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=19378.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:59:25,180 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-08 19:59:37,959 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=19394.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 19:59:54,018 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1770, 5.2193, 3.0386, 4.9756, 4.8792, 5.2121, 4.9215, 2.6818], device='cuda:3'), covar=tensor([0.0121, 0.0055, 0.0598, 0.0063, 0.0067, 0.0059, 0.0098, 0.0954], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0053, 0.0085, 0.0067, 0.0064, 0.0053, 0.0068, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3') 2023-03-08 20:00:10,013 INFO [train.py:898] (3/4) Epoch 6, batch 1250, loss[loss=0.2257, simple_loss=0.3101, pruned_loss=0.07064, over 18231.00 frames. ], tot_loss[loss=0.2245, simple_loss=0.3019, pruned_loss=0.07354, over 3581981.66 frames. 
], batch size: 60, lr: 1.89e-02, grad_scale: 8.0 2023-03-08 20:00:16,821 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.768e+02 4.335e+02 5.425e+02 6.798e+02 1.467e+03, threshold=1.085e+03, percent-clipped=6.0 2023-03-08 20:00:19,670 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.0831, 3.5573, 5.0528, 3.8617, 2.9304, 2.6555, 4.1672, 5.1415], device='cuda:3'), covar=tensor([0.1094, 0.1579, 0.0049, 0.0393, 0.0894, 0.1186, 0.0323, 0.0115], device='cuda:3'), in_proj_covar=tensor([0.0134, 0.0191, 0.0073, 0.0143, 0.0162, 0.0165, 0.0143, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002], device='cuda:3') 2023-03-08 20:00:49,590 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=19455.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 20:00:53,323 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.06 vs. limit=5.0 2023-03-08 20:01:08,435 INFO [train.py:898] (3/4) Epoch 6, batch 1300, loss[loss=0.1818, simple_loss=0.254, pruned_loss=0.05478, over 18451.00 frames. ], tot_loss[loss=0.2248, simple_loss=0.3021, pruned_loss=0.07379, over 3580810.65 frames. ], batch size: 43, lr: 1.89e-02, grad_scale: 8.0 2023-03-08 20:01:25,189 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19485.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 20:02:07,610 INFO [train.py:898] (3/4) Epoch 6, batch 1350, loss[loss=0.2134, simple_loss=0.2921, pruned_loss=0.06741, over 18541.00 frames. ], tot_loss[loss=0.2255, simple_loss=0.3023, pruned_loss=0.07436, over 3565817.73 frames. ], batch size: 49, lr: 1.89e-02, grad_scale: 8.0 2023-03-08 20:02:15,076 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.530e+02 3.979e+02 4.823e+02 6.493e+02 1.049e+03, threshold=9.645e+02, percent-clipped=0.0 2023-03-08 20:03:05,911 INFO [train.py:898] (3/4) Epoch 6, batch 1400, loss[loss=0.211, simple_loss=0.2847, pruned_loss=0.06871, over 18352.00 frames. ], tot_loss[loss=0.2248, simple_loss=0.3015, pruned_loss=0.07405, over 3571304.13 frames. ], batch size: 46, lr: 1.88e-02, grad_scale: 8.0 2023-03-08 20:03:23,795 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19586.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 20:03:33,936 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-08 20:04:01,601 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=19618.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 20:04:04,685 INFO [train.py:898] (3/4) Epoch 6, batch 1450, loss[loss=0.2012, simple_loss=0.2765, pruned_loss=0.06299, over 18260.00 frames. ], tot_loss[loss=0.2224, simple_loss=0.2993, pruned_loss=0.07271, over 3582711.35 frames. ], batch size: 47, lr: 1.88e-02, grad_scale: 8.0 2023-03-08 20:04:11,634 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.837e+02 4.168e+02 4.993e+02 6.128e+02 1.216e+03, threshold=9.987e+02, percent-clipped=6.0 2023-03-08 20:04:20,345 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=19634.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 20:05:03,112 INFO [train.py:898] (3/4) Epoch 6, batch 1500, loss[loss=0.2507, simple_loss=0.326, pruned_loss=0.08766, over 18468.00 frames. ], tot_loss[loss=0.2227, simple_loss=0.2998, pruned_loss=0.07276, over 3583259.10 frames. 
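
The batch size swings between roughly 42 and 131 across these records while each batch's frame count stays near 17-19k (about three minutes of audio at the standard 10 ms frame shift), because the sampler packs utterances of similar duration until a total-duration budget is reached rather than drawing a fixed number of sentences; short cuts yield large batches, long cuts small ones. A sketch using lhotse's dynamic bucketing sampler; the path and max_duration value are illustrative:

    from lhotse import CutSet
    from lhotse.dataset import DynamicBucketingSampler

    cuts = CutSet.from_file("data/fbank/train_cuts.jsonl.gz")  # illustrative path

    sampler = DynamicBucketingSampler(
        cuts,
        max_duration=200.0,  # seconds of audio per batch (illustrative)
        num_buckets=30,      # bucket cuts by duration so batches are homogeneous
        shuffle=True,
        drop_last=True,
    )
    # Short utterances -> many per batch ("batch size: 131"),
    # long utterances -> few ("batch size: 42"), total duration ~constant.
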
], batch size: 59, lr: 1.88e-02, grad_scale: 8.0 2023-03-08 20:05:12,454 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=19679.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 20:06:01,661 INFO [train.py:898] (3/4) Epoch 6, batch 1550, loss[loss=0.2375, simple_loss=0.3264, pruned_loss=0.07426, over 17020.00 frames. ], tot_loss[loss=0.2217, simple_loss=0.2993, pruned_loss=0.07207, over 3589461.95 frames. ], batch size: 78, lr: 1.88e-02, grad_scale: 8.0 2023-03-08 20:06:09,096 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.820e+02 4.112e+02 5.168e+02 6.335e+02 1.578e+03, threshold=1.034e+03, percent-clipped=4.0 2023-03-08 20:06:36,310 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19750.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 20:07:01,070 INFO [train.py:898] (3/4) Epoch 6, batch 1600, loss[loss=0.245, simple_loss=0.3131, pruned_loss=0.08844, over 18568.00 frames. ], tot_loss[loss=0.2229, simple_loss=0.3003, pruned_loss=0.07272, over 3580573.39 frames. ], batch size: 54, lr: 1.87e-02, grad_scale: 8.0 2023-03-08 20:07:17,705 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19785.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 20:07:23,838 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7887, 4.5707, 4.7977, 4.4877, 4.3970, 4.6331, 5.0252, 4.7998], device='cuda:3'), covar=tensor([0.0069, 0.0124, 0.0115, 0.0113, 0.0109, 0.0131, 0.0111, 0.0145], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0051, 0.0050, 0.0064, 0.0054, 0.0073, 0.0063, 0.0060], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 20:07:58,558 INFO [train.py:898] (3/4) Epoch 6, batch 1650, loss[loss=0.2535, simple_loss=0.3342, pruned_loss=0.08646, over 17999.00 frames. ], tot_loss[loss=0.2236, simple_loss=0.3011, pruned_loss=0.073, over 3590690.05 frames. ], batch size: 65, lr: 1.87e-02, grad_scale: 8.0 2023-03-08 20:08:06,543 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.131e+02 4.738e+02 5.632e+02 7.460e+02 1.782e+03, threshold=1.126e+03, percent-clipped=8.0 2023-03-08 20:08:07,178 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.48 vs. limit=5.0 2023-03-08 20:08:13,471 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=19833.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 20:08:56,802 INFO [train.py:898] (3/4) Epoch 6, batch 1700, loss[loss=0.2726, simple_loss=0.3389, pruned_loss=0.1032, over 18466.00 frames. ], tot_loss[loss=0.2246, simple_loss=0.3019, pruned_loss=0.07364, over 3592540.99 frames. ], batch size: 59, lr: 1.87e-02, grad_scale: 8.0 2023-03-08 20:09:45,572 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2503, 2.4627, 2.2656, 2.6468, 3.3304, 3.3070, 2.5898, 2.7723], device='cuda:3'), covar=tensor([0.0213, 0.0290, 0.0696, 0.0426, 0.0286, 0.0139, 0.0446, 0.0396], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0075, 0.0132, 0.0109, 0.0078, 0.0060, 0.0100, 0.0104], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 20:09:55,328 INFO [train.py:898] (3/4) Epoch 6, batch 1750, loss[loss=0.2139, simple_loss=0.3033, pruned_loss=0.06227, over 18362.00 frames. ], tot_loss[loss=0.2249, simple_loss=0.3026, pruned_loss=0.07366, over 3595381.27 frames. 
2023-03-08 20:10:02,903 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.466e+02 3.770e+02 4.789e+02 6.100e+02 1.481e+03, threshold=9.579e+02, percent-clipped=1.0
2023-03-08 20:10:39,502 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4748, 3.2928, 1.6696, 4.1811, 2.8012, 4.3499, 1.7967, 3.6635], device='cuda:3'), covar=tensor([0.0516, 0.0877, 0.1667, 0.0396, 0.0991, 0.0200, 0.1404, 0.0365], device='cuda:3'), in_proj_covar=tensor([0.0160, 0.0195, 0.0168, 0.0177, 0.0167, 0.0144, 0.0174, 0.0167], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:10:45,100 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5634, 3.4419, 1.7844, 4.2648, 2.8560, 4.5228, 1.9430, 3.9618], device='cuda:3'), covar=tensor([0.0556, 0.0859, 0.1638, 0.0439, 0.0965, 0.0196, 0.1355, 0.0356], device='cuda:3'), in_proj_covar=tensor([0.0160, 0.0195, 0.0168, 0.0178, 0.0167, 0.0144, 0.0174, 0.0167], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:10:53,715 INFO [train.py:898] (3/4) Epoch 6, batch 1800, loss[loss=0.2423, simple_loss=0.315, pruned_loss=0.08479, over 17826.00 frames. ], tot_loss[loss=0.224, simple_loss=0.3015, pruned_loss=0.07328, over 3593319.04 frames. ], batch size: 70, lr: 1.87e-02, grad_scale: 8.0
2023-03-08 20:10:57,345 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19974.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:11:06,320 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4315, 3.3801, 1.9153, 4.1601, 2.8457, 4.4493, 2.0366, 4.0142], device='cuda:3'), covar=tensor([0.0522, 0.0867, 0.1426, 0.0371, 0.0906, 0.0131, 0.1236, 0.0286], device='cuda:3'), in_proj_covar=tensor([0.0159, 0.0195, 0.0167, 0.0177, 0.0167, 0.0143, 0.0173, 0.0166], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:11:27,775 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-08 20:11:34,653 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-08 20:11:45,965 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0
2023-03-08 20:11:55,814 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20020.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:11:56,566 INFO [train.py:898] (3/4) Epoch 6, batch 1850, loss[loss=0.2337, simple_loss=0.3056, pruned_loss=0.08088, over 18363.00 frames. ], tot_loss[loss=0.2229, simple_loss=0.3, pruned_loss=0.07291, over 3597061.63 frames. ], batch size: 50, lr: 1.86e-02, grad_scale: 8.0
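Note on the [zipformer.py:1455] lines: attn_weights_entropy is a per-head diagnostic of how peaked each attention head is; values near log(seq_len) mean nearly uniform attention, small values mean the head focuses on a few positions. A sketch of one way such a statistic can be computed (the exact reduction used by the module is an assumption here):

    import torch

    def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: (num_heads, tgt_len, src_len), each row a distribution.
        eps = 1.0e-20
        entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return entropy.mean(dim=-1)  # one scalar per head, as logged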
2023-03-08 20:12:03,174 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.737e+02 4.293e+02 5.407e+02 6.655e+02 1.124e+03, threshold=1.081e+03, percent-clipped=3.0
2023-03-08 20:12:08,183 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2996, 3.1570, 1.7238, 3.9831, 2.7268, 4.1126, 2.3424, 3.5566], device='cuda:3'), covar=tensor([0.0514, 0.0824, 0.1474, 0.0357, 0.0869, 0.0156, 0.0936, 0.0321], device='cuda:3'), in_proj_covar=tensor([0.0160, 0.0194, 0.0166, 0.0178, 0.0168, 0.0143, 0.0172, 0.0164], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:12:31,065 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20050.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:12:55,163 INFO [train.py:898] (3/4) Epoch 6, batch 1900, loss[loss=0.202, simple_loss=0.2721, pruned_loss=0.06588, over 18418.00 frames. ], tot_loss[loss=0.2239, simple_loss=0.301, pruned_loss=0.0734, over 3585596.88 frames. ], batch size: 43, lr: 1.86e-02, grad_scale: 8.0
2023-03-08 20:13:07,122 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20081.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:13:17,151 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20089.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:13:27,666 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=20098.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:13:44,380 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20112.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:13:54,337 INFO [train.py:898] (3/4) Epoch 6, batch 1950, loss[loss=0.2341, simple_loss=0.3177, pruned_loss=0.07521, over 18490.00 frames. ], tot_loss[loss=0.2239, simple_loss=0.3012, pruned_loss=0.0733, over 3585604.07 frames. ], batch size: 59, lr: 1.86e-02, grad_scale: 8.0
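Note on the [zipformer.py:625] lines: each encoder stack has its own warmup window in batches (warmup_begin/warmup_end), and on every batch a few whole layers may be bypassed (num_to_drop, layers_to_drop). A sketch under the assumption that the drop probability ramps down over the warmup window and keeps a small floor afterwards, which would explain the occasional num_to_drop=1 lines (e.g. layers_to_drop={1} at batch_count=19485.0 above) long past warmup_end; the real schedule and constants live in zipformer.py:

    import random

    def pick_layers_to_drop(num_layers, batch_count, warmup_begin, warmup_end,
                            initial_prob=0.5, floor_prob=0.05):
        # initial_prob and floor_prob are assumed values, not read from the code.
        if batch_count < warmup_begin:
            drop_prob = initial_prob
        elif batch_count < warmup_end:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            drop_prob = initial_prob * (1.0 - frac) + floor_prob * frac
        else:
            drop_prob = floor_prob
        return {i for i in range(num_layers) if random.random() < drop_prob}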
2023-03-08 20:14:01,199 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.918e+02 4.095e+02 5.447e+02 6.778e+02 1.959e+03, threshold=1.089e+03, percent-clipped=6.0
2023-03-08 20:14:04,934 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4773, 4.4679, 2.4742, 4.4223, 5.4100, 2.7114, 4.0183, 3.7075], device='cuda:3'), covar=tensor([0.0046, 0.0685, 0.1437, 0.0412, 0.0035, 0.1073, 0.0509, 0.0690], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0178, 0.0181, 0.0176, 0.0075, 0.0167, 0.0189, 0.0185], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-08 20:14:18,217 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3668, 2.4518, 2.3480, 2.8139, 3.3790, 3.2004, 2.5434, 2.7945], device='cuda:3'), covar=tensor([0.0195, 0.0353, 0.0661, 0.0357, 0.0180, 0.0155, 0.0415, 0.0261], device='cuda:3'), in_proj_covar=tensor([0.0103, 0.0078, 0.0135, 0.0111, 0.0080, 0.0062, 0.0100, 0.0103], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:14:28,670 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20150.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:14:42,973 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5366, 4.3341, 4.5604, 4.3359, 4.2342, 4.4292, 4.7399, 4.6561], device='cuda:3'), covar=tensor([0.0071, 0.0098, 0.0077, 0.0114, 0.0104, 0.0122, 0.0130, 0.0142], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0050, 0.0049, 0.0063, 0.0053, 0.0073, 0.0062, 0.0060], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-08 20:14:52,816 INFO [train.py:898] (3/4) Epoch 6, batch 2000, loss[loss=0.2084, simple_loss=0.278, pruned_loss=0.06941, over 18425.00 frames. ], tot_loss[loss=0.2227, simple_loss=0.3001, pruned_loss=0.07263, over 3592505.40 frames. ], batch size: 42, lr: 1.86e-02, grad_scale: 8.0
2023-03-08 20:14:55,575 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20173.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:15:08,338 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.63 vs. limit=2.0
2023-03-08 20:15:32,138 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20204.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:15:51,543 INFO [train.py:898] (3/4) Epoch 6, batch 2050, loss[loss=0.2144, simple_loss=0.2943, pruned_loss=0.0673, over 18360.00 frames. ], tot_loss[loss=0.2239, simple_loss=0.3014, pruned_loss=0.07323, over 3590821.59 frames. ], batch size: 46, lr: 1.86e-02, grad_scale: 4.0
2023-03-08 20:15:56,387 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6969, 1.7756, 3.0640, 2.7190, 3.6530, 5.1453, 4.3490, 4.3950], device='cuda:3'), covar=tensor([0.0555, 0.1274, 0.1115, 0.0792, 0.1067, 0.0036, 0.0264, 0.0175], device='cuda:3'), in_proj_covar=tensor([0.0170, 0.0222, 0.0206, 0.0204, 0.0302, 0.0123, 0.0191, 0.0153], device='cuda:3'), out_proj_covar=tensor([1.1985e-04, 1.5794e-04, 1.5459e-04, 1.3393e-04, 2.1563e-04, 8.2048e-05, 1.3230e-04, 1.0803e-04], device='cuda:3')
2023-03-08 20:15:59,260 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.572e+02 4.223e+02 5.010e+02 6.268e+02 1.616e+03, threshold=1.002e+03, percent-clipped=2.0
2023-03-08 20:16:10,137 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2940, 2.7270, 2.4131, 2.9890, 3.4754, 3.2785, 2.9443, 3.0204], device='cuda:3'), covar=tensor([0.0292, 0.0298, 0.0675, 0.0248, 0.0217, 0.0128, 0.0290, 0.0280], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0075, 0.0132, 0.0106, 0.0077, 0.0061, 0.0098, 0.0101], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:16:17,203 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4207, 6.0490, 5.3500, 5.8106, 5.5296, 5.5562, 6.0437, 6.0121], device='cuda:3'), covar=tensor([0.1189, 0.0574, 0.0489, 0.0629, 0.1346, 0.0610, 0.0450, 0.0588], device='cuda:3'), in_proj_covar=tensor([0.0436, 0.0348, 0.0276, 0.0383, 0.0533, 0.0384, 0.0451, 0.0365], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-08 20:16:43,523 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20265.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:16:49,846 INFO [train.py:898] (3/4) Epoch 6, batch 2100, loss[loss=0.2265, simple_loss=0.3073, pruned_loss=0.07289, over 18374.00 frames. ], tot_loss[loss=0.2251, simple_loss=0.3026, pruned_loss=0.07381, over 3586659.17 frames. ], batch size: 50, lr: 1.85e-02, grad_scale: 4.0
2023-03-08 20:16:53,538 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20274.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:17:35,504 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.13 vs. limit=2.0
2023-03-08 20:17:49,006 INFO [train.py:898] (3/4) Epoch 6, batch 2150, loss[loss=0.1965, simple_loss=0.2721, pruned_loss=0.06048, over 18499.00 frames. ], tot_loss[loss=0.2245, simple_loss=0.302, pruned_loss=0.07353, over 3584379.62 frames. ], batch size: 47, lr: 1.85e-02, grad_scale: 4.0
2023-03-08 20:17:50,329 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=20322.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:17:56,866 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.681e+02 4.510e+02 5.215e+02 6.724e+02 1.670e+03, threshold=1.043e+03, percent-clipped=8.0
2023-03-08 20:18:47,256 INFO [train.py:898] (3/4) Epoch 6, batch 2200, loss[loss=0.2309, simple_loss=0.3132, pruned_loss=0.07424, over 18565.00 frames. ], tot_loss[loss=0.2252, simple_loss=0.3028, pruned_loss=0.07384, over 3587656.69 frames. ], batch size: 54, lr: 1.85e-02, grad_scale: 4.0
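Note on the grad_scale field: it is the dynamic fp16 loss scale. It halves when a step produces inf/nan gradients (8.0 -> 4.0 around batch 2050 above) and doubles again after a long enough run of clean steps (back to 8.0 by batch 2400 below). A sketch of the standard torch.cuda.amp loop with that behavior; model, optimizer, batch and compute_loss are placeholders, not icefall's actual API:

    import torch
    from torch.cuda.amp import GradScaler, autocast

    scaler = GradScaler(init_scale=8.0, growth_interval=2000)

    def train_step(model, optimizer, batch, compute_loss):
        optimizer.zero_grad()
        with autocast():
            loss = compute_loss(model, batch)
        scaler.scale(loss).backward()
        scaler.step(optimizer)   # internally skipped if gradients overflowed
        scaler.update()          # halves on overflow, doubles periodically
        return loss.detach(), scaler.get_scale()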
2023-03-08 20:18:53,240 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20376.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:19:07,082 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20388.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:19:43,522 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20418.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:19:46,527 INFO [train.py:898] (3/4) Epoch 6, batch 2250, loss[loss=0.2221, simple_loss=0.3072, pruned_loss=0.06853, over 18226.00 frames. ], tot_loss[loss=0.2243, simple_loss=0.3021, pruned_loss=0.07324, over 3601549.68 frames. ], batch size: 60, lr: 1.85e-02, grad_scale: 4.0
2023-03-08 20:19:54,694 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.728e+02 4.214e+02 4.964e+02 6.088e+02 1.302e+03, threshold=9.929e+02, percent-clipped=3.0
2023-03-08 20:20:13,870 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20445.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:20:18,781 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20449.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:20:41,596 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20468.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:20:41,749 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8811, 4.9539, 4.9890, 4.9172, 4.7388, 4.8197, 5.2268, 5.2076], device='cuda:3'), covar=tensor([0.0053, 0.0057, 0.0060, 0.0066, 0.0061, 0.0092, 0.0052, 0.0072], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0048, 0.0048, 0.0063, 0.0053, 0.0072, 0.0061, 0.0060], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-08 20:20:44,843 INFO [train.py:898] (3/4) Epoch 6, batch 2300, loss[loss=0.2017, simple_loss=0.2832, pruned_loss=0.06006, over 18544.00 frames. ], tot_loss[loss=0.2239, simple_loss=0.3017, pruned_loss=0.07309, over 3601312.39 frames. ], batch size: 49, lr: 1.84e-02, grad_scale: 4.0
2023-03-08 20:20:54,751 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20479.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:21:04,841 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5606, 1.9385, 2.7824, 2.6556, 3.3646, 4.9572, 4.1308, 4.0385], device='cuda:3'), covar=tensor([0.0618, 0.1226, 0.1221, 0.0833, 0.1229, 0.0039, 0.0283, 0.0204], device='cuda:3'), in_proj_covar=tensor([0.0173, 0.0224, 0.0209, 0.0208, 0.0303, 0.0126, 0.0192, 0.0155], device='cuda:3'), out_proj_covar=tensor([1.2161e-04, 1.5850e-04, 1.5679e-04, 1.3628e-04, 2.1536e-04, 8.4023e-05, 1.3282e-04, 1.0981e-04], device='cuda:3')
2023-03-08 20:21:38,148 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7147, 5.2165, 5.1916, 5.1514, 4.8629, 5.0724, 4.3754, 5.0231], device='cuda:3'), covar=tensor([0.0216, 0.0258, 0.0215, 0.0276, 0.0327, 0.0223, 0.1201, 0.0263], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0180, 0.0163, 0.0165, 0.0170, 0.0179, 0.0248, 0.0163], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005], device='cuda:3')
2023-03-08 20:21:43,768 INFO [train.py:898] (3/4) Epoch 6, batch 2350, loss[loss=0.1818, simple_loss=0.2597, pruned_loss=0.05195, over 18270.00 frames. ], tot_loss[loss=0.2244, simple_loss=0.3017, pruned_loss=0.07357, over 3601014.45 frames. ], batch size: 45, lr: 1.84e-02, grad_scale: 4.0
2023-03-08 20:21:52,077 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.562e+02 3.728e+02 4.764e+02 5.894e+02 1.500e+03, threshold=9.528e+02, percent-clipped=4.0
2023-03-08 20:22:05,998 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9986, 2.4789, 2.0454, 2.6126, 3.2120, 3.0666, 2.6394, 2.6932], device='cuda:3'), covar=tensor([0.0251, 0.0320, 0.0845, 0.0414, 0.0249, 0.0164, 0.0407, 0.0387], device='cuda:3'), in_proj_covar=tensor([0.0103, 0.0078, 0.0137, 0.0110, 0.0079, 0.0063, 0.0102, 0.0105], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:22:29,279 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20560.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:22:42,381 INFO [train.py:898] (3/4) Epoch 6, batch 2400, loss[loss=0.2486, simple_loss=0.3207, pruned_loss=0.0883, over 16067.00 frames. ], tot_loss[loss=0.2242, simple_loss=0.3014, pruned_loss=0.07352, over 3585520.88 frames. ], batch size: 95, lr: 1.84e-02, grad_scale: 8.0
2023-03-08 20:22:52,609 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0403, 3.9913, 5.0981, 3.5105, 4.3551, 2.7338, 2.9733, 2.3638], device='cuda:3'), covar=tensor([0.0591, 0.0477, 0.0041, 0.0352, 0.0417, 0.1676, 0.1762, 0.1215], device='cuda:3'), in_proj_covar=tensor([0.0172, 0.0185, 0.0090, 0.0145, 0.0196, 0.0229, 0.0218, 0.0185], device='cuda:3'), out_proj_covar=tensor([1.6142e-04, 1.7977e-04, 8.9458e-05, 1.3989e-04, 1.8940e-04, 2.2008e-04, 2.1471e-04, 1.8119e-04], device='cuda:3')
2023-03-08 20:23:02,852 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20588.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:23:41,479 INFO [train.py:898] (3/4) Epoch 6, batch 2450, loss[loss=0.2088, simple_loss=0.2854, pruned_loss=0.06615, over 18566.00 frames. ], tot_loss[loss=0.2244, simple_loss=0.3017, pruned_loss=0.07348, over 3575973.36 frames. ], batch size: 45, lr: 1.84e-02, grad_scale: 8.0
2023-03-08 20:23:49,390 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.965e+02 4.013e+02 4.826e+02 5.800e+02 1.376e+03, threshold=9.653e+02, percent-clipped=2.0
2023-03-08 20:24:13,866 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20649.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:24:38,627 INFO [train.py:898] (3/4) Epoch 6, batch 2500, loss[loss=0.2299, simple_loss=0.309, pruned_loss=0.07539, over 18114.00 frames. ], tot_loss[loss=0.2247, simple_loss=0.3017, pruned_loss=0.07382, over 3576638.87 frames. ], batch size: 62, lr: 1.84e-02, grad_scale: 8.0
2023-03-08 20:24:44,501 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20676.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:25:35,729 INFO [train.py:898] (3/4) Epoch 6, batch 2550, loss[loss=0.2013, simple_loss=0.2772, pruned_loss=0.06275, over 18257.00 frames. ], tot_loss[loss=0.2237, simple_loss=0.3008, pruned_loss=0.07335, over 3589339.81 frames. ], batch size: 47, lr: 1.83e-02, grad_scale: 4.0
2023-03-08 20:25:40,321 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=20724.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:25:45,642 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.233e+02 4.696e+02 5.623e+02 7.673e+02 1.890e+03, threshold=1.125e+03, percent-clipped=13.0
2023-03-08 20:26:03,310 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20744.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:26:04,517 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20745.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:26:30,268 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20768.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:26:34,037 INFO [train.py:898] (3/4) Epoch 6, batch 2600, loss[loss=0.1983, simple_loss=0.2678, pruned_loss=0.06443, over 18422.00 frames. ], tot_loss[loss=0.2238, simple_loss=0.301, pruned_loss=0.0733, over 3588011.70 frames. ], batch size: 42, lr: 1.83e-02, grad_scale: 4.0
2023-03-08 20:26:38,088 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20774.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:26:55,274 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0
2023-03-08 20:27:00,485 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=20793.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:27:23,824 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7577, 3.0884, 4.3030, 4.1055, 2.4307, 4.6864, 4.0744, 3.0119], device='cuda:3'), covar=tensor([0.0346, 0.1067, 0.0139, 0.0201, 0.1526, 0.0106, 0.0278, 0.0820], device='cuda:3'), in_proj_covar=tensor([0.0155, 0.0193, 0.0109, 0.0118, 0.0196, 0.0150, 0.0158, 0.0178], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:27:27,129 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=20816.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:27:32,622 INFO [train.py:898] (3/4) Epoch 6, batch 2650, loss[loss=0.2286, simple_loss=0.3153, pruned_loss=0.07102, over 18075.00 frames. ], tot_loss[loss=0.2219, simple_loss=0.2996, pruned_loss=0.07209, over 3591467.16 frames. ], batch size: 62, lr: 1.83e-02, grad_scale: 4.0
2023-03-08 20:27:43,390 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.822e+02 3.946e+02 4.764e+02 5.553e+02 1.236e+03, threshold=9.528e+02, percent-clipped=1.0
2023-03-08 20:28:19,229 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20860.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:28:31,289 INFO [train.py:898] (3/4) Epoch 6, batch 2700, loss[loss=0.2098, simple_loss=0.2888, pruned_loss=0.06539, over 18419.00 frames. ], tot_loss[loss=0.2215, simple_loss=0.2993, pruned_loss=0.07186, over 3583430.52 frames. ], batch size: 48, lr: 1.83e-02, grad_scale: 4.0
2023-03-08 20:29:14,535 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=20908.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:29:28,982 INFO [train.py:898] (3/4) Epoch 6, batch 2750, loss[loss=0.2354, simple_loss=0.3119, pruned_loss=0.07941, over 17806.00 frames. ], tot_loss[loss=0.2209, simple_loss=0.2982, pruned_loss=0.07181, over 3581265.51 frames. ], batch size: 70, lr: 1.83e-02, grad_scale: 4.0
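Note on the [scaling.py:679] lines: the whitening constraint splits the channels into num_groups groups and measures how far each group's feature covariance is from a multiple of the identity; a penalty activates when the metric exceeds the logged limit. A sketch of one scale-invariant whiteness measure with that behavior, equal to 1.0 for a perfectly white covariance and growing as the spectrum becomes lopsided; the exact formula in scaling.py may differ in details:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (num_frames, num_channels). With eigenvalues l_1..l_d of each
        # group's covariance, metric = d * sum(l_i^2) / (sum l_i)^2.
        num_frames, num_channels = x.shape
        d = num_channels // num_groups
        x = x.reshape(num_frames, num_groups, d).transpose(0, 1)
        x = x - x.mean(dim=1, keepdim=True)
        covar = torch.matmul(x.transpose(1, 2), x) / num_frames  # (groups, d, d)
        trace = covar.diagonal(dim1=1, dim2=2).sum(-1)           # sum of eigenvalues
        sq_sum = (covar * covar).sum(dim=(1, 2))                 # sum of squared eigenvalues
        return (d * sq_sum / trace.clamp(min=1e-20) ** 2).mean()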
2023-03-08 20:29:38,695 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.441e+02 4.086e+02 5.349e+02 6.220e+02 9.692e+02, threshold=1.070e+03, percent-clipped=2.0
2023-03-08 20:29:41,840 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1374, 2.4496, 2.1985, 2.6214, 3.2696, 3.2628, 2.6774, 2.9382], device='cuda:3'), covar=tensor([0.0231, 0.0271, 0.0774, 0.0405, 0.0228, 0.0180, 0.0389, 0.0282], device='cuda:3'), in_proj_covar=tensor([0.0105, 0.0081, 0.0137, 0.0113, 0.0082, 0.0066, 0.0102, 0.0102], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:29:57,126 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20944.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:30:14,714 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0
2023-03-08 20:30:17,741 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1723, 5.2292, 2.5893, 5.0305, 4.8919, 5.3336, 4.9785, 2.2664], device='cuda:3'), covar=tensor([0.0152, 0.0082, 0.0960, 0.0095, 0.0080, 0.0072, 0.0161, 0.1562], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0056, 0.0085, 0.0069, 0.0065, 0.0055, 0.0070, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3')
2023-03-08 20:30:27,628 INFO [train.py:898] (3/4) Epoch 6, batch 2800, loss[loss=0.2008, simple_loss=0.2893, pruned_loss=0.0561, over 18617.00 frames. ], tot_loss[loss=0.2222, simple_loss=0.2998, pruned_loss=0.07231, over 3564795.60 frames. ], batch size: 52, lr: 1.82e-02, grad_scale: 8.0
2023-03-08 20:31:09,462 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0484, 4.7775, 2.8181, 4.5525, 4.5408, 4.7444, 4.5680, 2.7270], device='cuda:3'), covar=tensor([0.0116, 0.0042, 0.0590, 0.0078, 0.0050, 0.0051, 0.0081, 0.0826], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0055, 0.0084, 0.0069, 0.0064, 0.0054, 0.0069, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3')
2023-03-08 20:31:26,854 INFO [train.py:898] (3/4) Epoch 6, batch 2850, loss[loss=0.2208, simple_loss=0.3082, pruned_loss=0.06669, over 18309.00 frames. ], tot_loss[loss=0.2215, simple_loss=0.299, pruned_loss=0.07201, over 3568796.77 frames. ], batch size: 54, lr: 1.82e-02, grad_scale: 4.0
2023-03-08 20:31:37,651 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.172e+02 4.192e+02 4.961e+02 6.331e+02 1.118e+03, threshold=9.922e+02, percent-clipped=2.0
2023-03-08 20:31:54,457 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21044.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:31:58,043 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-08 20:32:04,007 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21052.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:32:06,328 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7185, 3.3906, 1.9970, 4.3630, 3.0691, 4.5698, 2.0543, 3.7946], device='cuda:3'), covar=tensor([0.0407, 0.0647, 0.1298, 0.0351, 0.0741, 0.0154, 0.1162, 0.0362], device='cuda:3'), in_proj_covar=tensor([0.0161, 0.0192, 0.0164, 0.0177, 0.0164, 0.0156, 0.0172, 0.0164], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:32:09,614 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5975, 3.1887, 1.8636, 4.2581, 2.9296, 4.4819, 1.7279, 3.6515], device='cuda:3'), covar=tensor([0.0450, 0.0752, 0.1313, 0.0374, 0.0785, 0.0171, 0.1285, 0.0360], device='cuda:3'), in_proj_covar=tensor([0.0161, 0.0192, 0.0164, 0.0178, 0.0165, 0.0157, 0.0172, 0.0164], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:32:10,792 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2143, 4.3363, 2.2321, 4.3832, 5.2171, 2.3093, 3.7707, 3.6660], device='cuda:3'), covar=tensor([0.0085, 0.1111, 0.1716, 0.0446, 0.0048, 0.1488, 0.0641, 0.0876], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0180, 0.0179, 0.0175, 0.0075, 0.0166, 0.0186, 0.0182], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:32:24,739 INFO [train.py:898] (3/4) Epoch 6, batch 2900, loss[loss=0.2257, simple_loss=0.3056, pruned_loss=0.07287, over 18242.00 frames. ], tot_loss[loss=0.2232, simple_loss=0.3004, pruned_loss=0.07303, over 3559125.75 frames. ], batch size: 60, lr: 1.82e-02, grad_scale: 4.0
2023-03-08 20:32:28,168 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21074.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:32:49,269 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=21092.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:33:14,679 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6437, 4.6059, 4.1434, 4.5613, 4.5775, 3.9342, 4.5001, 4.2298], device='cuda:3'), covar=tensor([0.0428, 0.0473, 0.1758, 0.0760, 0.0443, 0.0528, 0.0488, 0.0881], device='cuda:3'), in_proj_covar=tensor([0.0331, 0.0385, 0.0530, 0.0302, 0.0274, 0.0345, 0.0376, 0.0469], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-08 20:33:14,809 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21113.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:33:20,497 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21118.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:33:23,467 INFO [train.py:898] (3/4) Epoch 6, batch 2950, loss[loss=0.1855, simple_loss=0.2623, pruned_loss=0.05439, over 18409.00 frames. ], tot_loss[loss=0.2226, simple_loss=0.2999, pruned_loss=0.07269, over 3570409.95 frames. ], batch size: 42, lr: 1.82e-02, grad_scale: 4.0
2023-03-08 20:33:24,830 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=21122.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:33:33,497 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.673e+02 4.312e+02 5.615e+02 7.522e+02 2.010e+03, threshold=1.123e+03, percent-clipped=9.0
2023-03-08 20:33:33,912 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8318, 5.3675, 5.4084, 5.2227, 4.8841, 5.2433, 4.5307, 5.2228], device='cuda:3'), covar=tensor([0.0177, 0.0267, 0.0169, 0.0225, 0.0356, 0.0178, 0.1233, 0.0235], device='cuda:3'), in_proj_covar=tensor([0.0142, 0.0187, 0.0170, 0.0176, 0.0177, 0.0187, 0.0254, 0.0167], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3')
2023-03-08 20:33:45,552 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0
2023-03-08 20:34:22,617 INFO [train.py:898] (3/4) Epoch 6, batch 3000, loss[loss=0.2912, simple_loss=0.3466, pruned_loss=0.1179, over 12356.00 frames. ], tot_loss[loss=0.2219, simple_loss=0.2989, pruned_loss=0.07244, over 3570044.21 frames. ], batch size: 130, lr: 1.82e-02, grad_scale: 4.0
2023-03-08 20:34:22,617 INFO [train.py:923] (3/4) Computing validation loss
2023-03-08 20:34:30,815 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9059, 4.4499, 2.5162, 4.3700, 4.2417, 4.4330, 4.2650, 2.4436], device='cuda:3'), covar=tensor([0.0112, 0.0073, 0.0705, 0.0072, 0.0068, 0.0071, 0.0109, 0.0975], device='cuda:3'), in_proj_covar=tensor([0.0064, 0.0054, 0.0083, 0.0067, 0.0064, 0.0053, 0.0068, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3')
2023-03-08 20:34:34,666 INFO [train.py:932] (3/4) Epoch 6, validation: loss=0.1727, simple_loss=0.276, pruned_loss=0.03476, over 944034.00 frames.
2023-03-08 20:34:34,667 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-08 20:34:44,367 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.42 vs. limit=5.0
2023-03-08 20:34:45,200 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21179.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:35:33,721 INFO [train.py:898] (3/4) Epoch 6, batch 3050, loss[loss=0.2113, simple_loss=0.2898, pruned_loss=0.06644, over 18386.00 frames. ], tot_loss[loss=0.2212, simple_loss=0.2985, pruned_loss=0.07194, over 3585817.99 frames. ], batch size: 50, lr: 1.81e-02, grad_scale: 4.0
2023-03-08 20:35:45,057 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.742e+02 3.885e+02 4.643e+02 5.808e+02 1.137e+03, threshold=9.287e+02, percent-clipped=1.0
2023-03-08 20:36:02,223 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21244.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:36:32,475 INFO [train.py:898] (3/4) Epoch 6, batch 3100, loss[loss=0.221, simple_loss=0.304, pruned_loss=0.06902, over 18355.00 frames. ], tot_loss[loss=0.2204, simple_loss=0.2979, pruned_loss=0.07144, over 3594027.74 frames. ], batch size: 55, lr: 1.81e-02, grad_scale: 4.0
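Note on the train.py:923/932/933 lines: at intervals the trainer pauses, scores the whole dev set with gradients disabled ("over 944034.00 frames" is the dev-set total), and reports peak GPU memory. A minimal sketch of that pass; compute_loss returning (loss_sum, num_frames) is a placeholder for the real loss computation, not icefall's actual API:

    import torch

    def compute_validation_loss(model, valid_dl, device, compute_loss):
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in valid_dl:
                loss_sum, num_frames = compute_loss(model, batch)
                tot_loss += loss_sum
                tot_frames += num_frames
        model.train()
        # Peak memory on this device, as in "Maximum memory allocated so far".
        max_mem_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        return tot_loss / tot_frames, max_mem_mb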
2023-03-08 20:36:58,419 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=21292.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:37:21,542 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21312.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:37:31,448 INFO [train.py:898] (3/4) Epoch 6, batch 3150, loss[loss=0.226, simple_loss=0.3058, pruned_loss=0.07309, over 18240.00 frames. ], tot_loss[loss=0.2208, simple_loss=0.2984, pruned_loss=0.07155, over 3586078.62 frames. ], batch size: 60, lr: 1.81e-02, grad_scale: 4.0
2023-03-08 20:37:39,574 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21328.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:37:41,465 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.840e+02 4.055e+02 4.775e+02 6.175e+02 1.308e+03, threshold=9.551e+02, percent-clipped=5.0
2023-03-08 20:37:55,841 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0
2023-03-08 20:38:29,898 INFO [train.py:898] (3/4) Epoch 6, batch 3200, loss[loss=0.2359, simple_loss=0.3134, pruned_loss=0.07917, over 18364.00 frames. ], tot_loss[loss=0.2206, simple_loss=0.2982, pruned_loss=0.0715, over 3574096.02 frames. ], batch size: 55, lr: 1.81e-02, grad_scale: 8.0
2023-03-08 20:38:30,220 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9785, 4.9045, 4.9259, 4.8337, 4.7632, 4.8124, 5.2859, 5.0743], device='cuda:3'), covar=tensor([0.0056, 0.0082, 0.0073, 0.0100, 0.0073, 0.0098, 0.0099, 0.0123], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0049, 0.0049, 0.0064, 0.0054, 0.0073, 0.0060, 0.0061], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-08 20:38:32,528 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9283, 4.4260, 4.5964, 3.5823, 3.6837, 3.5863, 2.5025, 1.9974], device='cuda:3'), covar=tensor([0.0134, 0.0132, 0.0053, 0.0206, 0.0334, 0.0171, 0.0716, 0.0973], device='cuda:3'), in_proj_covar=tensor([0.0045, 0.0041, 0.0037, 0.0050, 0.0070, 0.0047, 0.0068, 0.0073], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0005, 0.0003, 0.0004, 0.0004], device='cuda:3')
2023-03-08 20:38:32,568 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21373.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:38:52,092 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21389.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:39:14,471 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21408.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:39:16,839 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2384, 4.3756, 2.4219, 4.4416, 5.2228, 2.5179, 4.0565, 4.0314], device='cuda:3'), covar=tensor([0.0073, 0.0918, 0.1693, 0.0490, 0.0052, 0.1521, 0.0628, 0.0618], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0181, 0.0181, 0.0175, 0.0077, 0.0168, 0.0188, 0.0184], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-08 20:39:28,804 INFO [train.py:898] (3/4) Epoch 6, batch 3250, loss[loss=0.2198, simple_loss=0.2992, pruned_loss=0.07019, over 18304.00 frames. ], tot_loss[loss=0.2205, simple_loss=0.298, pruned_loss=0.0715, over 3575241.32 frames. ], batch size: 49, lr: 1.81e-02, grad_scale: 8.0
2023-03-08 20:39:39,004 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.706e+02 4.124e+02 5.141e+02 6.520e+02 1.245e+03, threshold=1.028e+03, percent-clipped=2.0
2023-03-08 20:40:28,085 INFO [train.py:898] (3/4) Epoch 6, batch 3300, loss[loss=0.2327, simple_loss=0.314, pruned_loss=0.07575, over 18504.00 frames. ], tot_loss[loss=0.2209, simple_loss=0.2984, pruned_loss=0.07172, over 3576988.05 frames. ], batch size: 53, lr: 1.80e-02, grad_scale: 8.0
2023-03-08 20:40:31,938 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21474.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:41:27,331 INFO [train.py:898] (3/4) Epoch 6, batch 3350, loss[loss=0.2171, simple_loss=0.2933, pruned_loss=0.0705, over 18389.00 frames. ], tot_loss[loss=0.2194, simple_loss=0.2973, pruned_loss=0.0708, over 3584007.50 frames. ], batch size: 50, lr: 1.80e-02, grad_scale: 8.0
2023-03-08 20:41:37,405 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.459e+02 4.317e+02 5.194e+02 7.020e+02 1.247e+03, threshold=1.039e+03, percent-clipped=7.0
2023-03-08 20:42:06,002 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.3803, 3.5336, 5.0512, 4.1620, 3.3572, 2.8371, 4.3534, 5.1199], device='cuda:3'), covar=tensor([0.0838, 0.1413, 0.0053, 0.0314, 0.0675, 0.1062, 0.0276, 0.0129], device='cuda:3'), in_proj_covar=tensor([0.0128, 0.0189, 0.0072, 0.0141, 0.0159, 0.0162, 0.0142, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:42:25,861 INFO [train.py:898] (3/4) Epoch 6, batch 3400, loss[loss=0.2537, simple_loss=0.3219, pruned_loss=0.09276, over 18448.00 frames. ], tot_loss[loss=0.2193, simple_loss=0.2973, pruned_loss=0.07069, over 3591024.91 frames. ], batch size: 59, lr: 1.80e-02, grad_scale: 4.0
2023-03-08 20:42:53,928 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21595.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:43:15,658 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21613.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:43:24,528 INFO [train.py:898] (3/4) Epoch 6, batch 3450, loss[loss=0.1907, simple_loss=0.271, pruned_loss=0.05521, over 18275.00 frames. ], tot_loss[loss=0.2197, simple_loss=0.2977, pruned_loss=0.07081, over 3600354.16 frames. ], batch size: 45, lr: 1.80e-02, grad_scale: 4.0
2023-03-08 20:43:35,796 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.934e+02 4.048e+02 5.156e+02 6.271e+02 2.369e+03, threshold=1.031e+03, percent-clipped=5.0
2023-03-08 20:44:05,767 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21656.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:44:20,618 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21668.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:44:23,903 INFO [train.py:898] (3/4) Epoch 6, batch 3500, loss[loss=0.2276, simple_loss=0.3122, pruned_loss=0.07148, over 18105.00 frames. ], tot_loss[loss=0.2188, simple_loss=0.2967, pruned_loss=0.07042, over 3596661.85 frames. ], batch size: 62, lr: 1.80e-02, grad_scale: 4.0
2023-03-08 20:44:27,530 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21674.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:44:38,480 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21684.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:45:04,898 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21708.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:45:18,378 INFO [train.py:898] (3/4) Epoch 6, batch 3550, loss[loss=0.1903, simple_loss=0.2721, pruned_loss=0.05421, over 18380.00 frames. ], tot_loss[loss=0.2189, simple_loss=0.2966, pruned_loss=0.07055, over 3600045.73 frames. ], batch size: 50, lr: 1.79e-02, grad_scale: 4.0
2023-03-08 20:45:21,627 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2389, 5.1824, 4.7366, 5.2615, 5.1933, 4.5382, 5.1193, 4.7544], device='cuda:3'), covar=tensor([0.0399, 0.0449, 0.1571, 0.0605, 0.0514, 0.0456, 0.0393, 0.1004], device='cuda:3'), in_proj_covar=tensor([0.0336, 0.0386, 0.0532, 0.0303, 0.0283, 0.0357, 0.0374, 0.0477], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-08 20:45:28,808 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.680e+02 4.052e+02 4.672e+02 6.032e+02 1.745e+03, threshold=9.344e+02, percent-clipped=2.0
2023-03-08 20:45:29,919 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6456, 1.7009, 3.0131, 2.8904, 3.6857, 5.2736, 4.6529, 4.5524], device='cuda:3'), covar=tensor([0.0619, 0.1393, 0.1271, 0.0839, 0.1180, 0.0041, 0.0249, 0.0185], device='cuda:3'), in_proj_covar=tensor([0.0178, 0.0230, 0.0220, 0.0211, 0.0309, 0.0133, 0.0202, 0.0160], device='cuda:3'), out_proj_covar=tensor([1.2289e-04, 1.6007e-04, 1.6069e-04, 1.3594e-04, 2.1592e-04, 8.8997e-05, 1.3697e-04, 1.1082e-04], device='cuda:3')
2023-03-08 20:45:39,921 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5502, 3.2380, 4.0223, 3.0173, 3.5447, 2.5782, 2.5004, 2.5036], device='cuda:3'), covar=tensor([0.0630, 0.0560, 0.0077, 0.0343, 0.0549, 0.1658, 0.1767, 0.1039], device='cuda:3'), in_proj_covar=tensor([0.0168, 0.0183, 0.0090, 0.0142, 0.0197, 0.0225, 0.0220, 0.0184], device='cuda:3'), out_proj_covar=tensor([1.5654e-04, 1.7560e-04, 8.8477e-05, 1.3498e-04, 1.8851e-04, 2.1614e-04, 2.1518e-04, 1.7796e-04], device='cuda:3')
2023-03-08 20:45:56,748 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=21756.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:46:04,535 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21763.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:46:12,829 INFO [train.py:898] (3/4) Epoch 6, batch 3600, loss[loss=0.1685, simple_loss=0.2479, pruned_loss=0.04459, over 18232.00 frames. ], tot_loss[loss=0.2192, simple_loss=0.2967, pruned_loss=0.07079, over 3592022.65 frames. ], batch size: 45, lr: 1.79e-02, grad_scale: 8.0
2023-03-08 20:46:16,565 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21774.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:47:18,368 INFO [train.py:898] (3/4) Epoch 7, batch 0, loss[loss=0.2197, simple_loss=0.2971, pruned_loss=0.07117, over 17060.00 frames. ], tot_loss[loss=0.2197, simple_loss=0.2971, pruned_loss=0.07117, over 17060.00 frames. ], batch size: 78, lr: 1.68e-02, grad_scale: 8.0
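Note on the lr field: it decays smoothly with the batch count inside an epoch (1.89e-02 at batch 1300 down to 1.79e-02 by batch 3600) and takes a discrete step at each epoch boundary (1.79e-02 -> 1.68e-02 at Epoch 7, batch 0 above). A sketch of an Eden-style schedule with separate batch and epoch factors that produces this shape; the exponents and the lr_batches/lr_epochs constants are assumptions, not read from this log:

    def eden_lr(base_lr, batch, epoch, lr_batches, lr_epochs):
        # Slow within-epoch decay driven by the global batch count...
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        # ...plus a step at each epoch boundary driven by the epoch count.
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor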
2023-03-08 20:47:18,368 INFO [train.py:923] (3/4) Computing validation loss
2023-03-08 20:47:30,265 INFO [train.py:932] (3/4) Epoch 7, validation: loss=0.175, simple_loss=0.2779, pruned_loss=0.0361, over 944034.00 frames.
2023-03-08 20:47:30,266 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-08 20:47:44,535 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.89 vs. limit=2.0
2023-03-08 20:47:50,713 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=21822.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:47:53,867 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21824.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:48:01,604 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.548e+02 4.151e+02 4.797e+02 6.024e+02 1.150e+03, threshold=9.595e+02, percent-clipped=4.0
2023-03-08 20:48:23,611 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21850.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:48:28,870 INFO [train.py:898] (3/4) Epoch 7, batch 50, loss[loss=0.2029, simple_loss=0.2791, pruned_loss=0.06333, over 18500.00 frames. ], tot_loss[loss=0.2158, simple_loss=0.2938, pruned_loss=0.06893, over 815896.48 frames. ], batch size: 47, lr: 1.68e-02, grad_scale: 8.0
2023-03-08 20:49:11,414 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2564, 4.3292, 2.4244, 4.3883, 5.2485, 2.7342, 4.0002, 3.7467], device='cuda:3'), covar=tensor([0.0077, 0.0972, 0.1579, 0.0498, 0.0042, 0.1272, 0.0643, 0.0853], device='cuda:3'), in_proj_covar=tensor([0.0087, 0.0185, 0.0180, 0.0177, 0.0076, 0.0166, 0.0189, 0.0185], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-08 20:49:26,682 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6217, 5.0743, 5.7120, 5.5116, 5.4226, 6.2479, 5.8861, 5.5689], device='cuda:3'), covar=tensor([0.0803, 0.0652, 0.0612, 0.0560, 0.1428, 0.0574, 0.0571, 0.1749], device='cuda:3'), in_proj_covar=tensor([0.0265, 0.0203, 0.0207, 0.0203, 0.0248, 0.0289, 0.0196, 0.0291], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3')
2023-03-08 20:49:27,608 INFO [train.py:898] (3/4) Epoch 7, batch 100, loss[loss=0.1775, simple_loss=0.2554, pruned_loss=0.04982, over 18371.00 frames. ], tot_loss[loss=0.2165, simple_loss=0.2952, pruned_loss=0.06888, over 1434937.15 frames. ], batch size: 42, lr: 1.67e-02, grad_scale: 8.0
2023-03-08 20:49:35,024 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21911.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:49:58,913 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.330e+02 3.869e+02 4.841e+02 5.741e+02 1.343e+03, threshold=9.682e+02, percent-clipped=1.0
2023-03-08 20:50:02,297 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5376, 3.5565, 5.0871, 4.0907, 3.2150, 2.7385, 4.1488, 5.2745], device='cuda:3'), covar=tensor([0.0802, 0.1431, 0.0060, 0.0329, 0.0784, 0.1144, 0.0349, 0.0063], device='cuda:3'), in_proj_covar=tensor([0.0131, 0.0199, 0.0074, 0.0143, 0.0163, 0.0164, 0.0147, 0.0101], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:50:14,730 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4338, 1.8925, 2.6795, 2.8136, 3.4161, 5.0164, 4.3683, 4.0756], device='cuda:3'), covar=tensor([0.0677, 0.1335, 0.1418, 0.0823, 0.1234, 0.0040, 0.0257, 0.0227], device='cuda:3'), in_proj_covar=tensor([0.0178, 0.0230, 0.0220, 0.0212, 0.0310, 0.0134, 0.0202, 0.0160], device='cuda:3'), out_proj_covar=tensor([1.2273e-04, 1.5905e-04, 1.6044e-04, 1.3594e-04, 2.1641e-04, 8.8863e-05, 1.3642e-04, 1.1065e-04], device='cuda:3')
2023-03-08 20:50:22,292 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21951.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:50:26,552 INFO [train.py:898] (3/4) Epoch 7, batch 150, loss[loss=0.2182, simple_loss=0.3067, pruned_loss=0.06488, over 18379.00 frames. ], tot_loss[loss=0.2169, simple_loss=0.2949, pruned_loss=0.06947, over 1903454.40 frames. ], batch size: 55, lr: 1.67e-02, grad_scale: 8.0
2023-03-08 20:50:41,202 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21968.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:50:42,118 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21969.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:50:43,539 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21970.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:51:01,793 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21984.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:51:30,388 INFO [train.py:898] (3/4) Epoch 7, batch 200, loss[loss=0.2083, simple_loss=0.2753, pruned_loss=0.07063, over 18252.00 frames. ], tot_loss[loss=0.2164, simple_loss=0.2946, pruned_loss=0.0691, over 2272164.24 frames. ], batch size: 45, lr: 1.67e-02, grad_scale: 8.0
2023-03-08 20:51:43,119 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22016.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:51:59,832 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.533e+02 3.799e+02 4.724e+02 5.611e+02 1.174e+03, threshold=9.448e+02, percent-clipped=1.0
2023-03-08 20:52:00,852 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22031.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:52:01,876 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22032.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:52:11,336 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4770, 5.4484, 4.7574, 5.4052, 5.4628, 4.8939, 5.4283, 5.0236], device='cuda:3'), covar=tensor([0.0477, 0.0432, 0.2002, 0.0863, 0.0463, 0.0430, 0.0427, 0.0926], device='cuda:3'), in_proj_covar=tensor([0.0331, 0.0383, 0.0527, 0.0303, 0.0279, 0.0355, 0.0369, 0.0473], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-08 20:52:28,983 INFO [train.py:898] (3/4) Epoch 7, batch 250, loss[loss=0.1958, simple_loss=0.2652, pruned_loss=0.06321, over 18455.00 frames. ], tot_loss[loss=0.2158, simple_loss=0.2942, pruned_loss=0.06867, over 2575174.85 frames. ], batch size: 43, lr: 1.67e-02, grad_scale: 8.0
2023-03-08 20:53:20,843 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-08 20:53:28,160 INFO [train.py:898] (3/4) Epoch 7, batch 300, loss[loss=0.2249, simple_loss=0.3061, pruned_loss=0.07186, over 16972.00 frames. ], tot_loss[loss=0.2155, simple_loss=0.2938, pruned_loss=0.06857, over 2807000.12 frames. ], batch size: 78, lr: 1.67e-02, grad_scale: 8.0
2023-03-08 20:53:44,072 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22119.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:53:56,243 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22130.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:53:56,975 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.650e+02 3.954e+02 4.582e+02 6.003e+02 1.434e+03, threshold=9.164e+02, percent-clipped=5.0
2023-03-08 20:54:26,850 INFO [train.py:898] (3/4) Epoch 7, batch 350, loss[loss=0.1893, simple_loss=0.2622, pruned_loss=0.05824, over 16864.00 frames. ], tot_loss[loss=0.2163, simple_loss=0.2947, pruned_loss=0.0689, over 2982060.55 frames. ], batch size: 37, lr: 1.67e-02, grad_scale: 8.0
2023-03-08 20:54:36,523 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0
2023-03-08 20:54:50,822 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22176.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:55:08,693 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22191.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:55:25,946 INFO [train.py:898] (3/4) Epoch 7, batch 400, loss[loss=0.1938, simple_loss=0.2652, pruned_loss=0.06121, over 18447.00 frames. ], tot_loss[loss=0.2147, simple_loss=0.2927, pruned_loss=0.06834, over 3106181.53 frames. ], batch size: 42, lr: 1.66e-02, grad_scale: 8.0
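Note on the tot_loss[...] fields: loss[...] is the current batch while tot_loss[...] is a frame-weighted running average. Its "over M frames" count climbs batch by batch early in an epoch (815896 -> 1434937 -> 1903454 -> 2272164 frames at batches 50/100/150/200 above) and then levels off around 3.6M frames late in an epoch, which is consistent with an exponential forgetting factor. A sketch with an assumed decay constant (0.995 is a guess that happens to match the observed plateau, not a value read from train.py):

    class RunningLoss:
        def __init__(self, decay: float = 0.995):
            self.decay = decay
            self.loss_sum = 0.0
            self.frames = 0.0

        def update(self, batch_loss_sum: float, batch_frames: float) -> float:
            # Decay old statistics, then add the current batch.
            self.loss_sum = self.loss_sum * self.decay + batch_loss_sum
            self.frames = self.frames * self.decay + batch_frames
            return self.loss_sum / self.frames  # the value logged as tot_loss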
2023-03-08 20:55:27,328 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22206.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:55:37,485 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22215.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:55:55,641 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.341e+02 3.724e+02 4.646e+02 6.354e+02 1.977e+03, threshold=9.292e+02, percent-clipped=7.0
2023-03-08 20:56:02,925 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22237.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:56:02,943 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22237.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:56:13,334 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5408, 3.2791, 2.2383, 4.4209, 2.9016, 4.5804, 2.1314, 3.9225], device='cuda:3'), covar=tensor([0.0500, 0.0846, 0.1304, 0.0348, 0.0916, 0.0236, 0.1232, 0.0355], device='cuda:3'), in_proj_covar=tensor([0.0164, 0.0196, 0.0169, 0.0189, 0.0168, 0.0168, 0.0177, 0.0166], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002], device='cuda:3')
2023-03-08 20:56:20,907 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22251.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:56:25,002 INFO [train.py:898] (3/4) Epoch 7, batch 450, loss[loss=0.2584, simple_loss=0.3265, pruned_loss=0.09514, over 17091.00 frames. ], tot_loss[loss=0.2153, simple_loss=0.2934, pruned_loss=0.06855, over 3211626.65 frames. ], batch size: 78, lr: 1.66e-02, grad_scale: 8.0
2023-03-08 20:56:41,343 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22269.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:56:49,440 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22276.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:56:51,589 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2507, 5.2356, 4.7244, 5.2595, 5.1747, 4.5479, 5.0897, 4.7794], device='cuda:3'), covar=tensor([0.0400, 0.0399, 0.1398, 0.0587, 0.0524, 0.0472, 0.0431, 0.0966], device='cuda:3'), in_proj_covar=tensor([0.0330, 0.0375, 0.0522, 0.0301, 0.0275, 0.0353, 0.0370, 0.0470], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-08 20:57:15,016 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22298.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:57:16,448 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22299.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:57:23,806 INFO [train.py:898] (3/4) Epoch 7, batch 500, loss[loss=0.2276, simple_loss=0.3088, pruned_loss=0.07317, over 18569.00 frames. ], tot_loss[loss=0.2163, simple_loss=0.2947, pruned_loss=0.06895, over 3299459.36 frames. ], batch size: 54, lr: 1.66e-02, grad_scale: 8.0
2023-03-08 20:57:36,524 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2386, 5.4101, 2.6689, 5.1455, 5.0668, 5.3929, 5.2286, 2.5317], device='cuda:3'), covar=tensor([0.0140, 0.0037, 0.0803, 0.0070, 0.0068, 0.0054, 0.0085, 0.1058], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0055, 0.0084, 0.0069, 0.0064, 0.0055, 0.0069, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3')
2023-03-08 20:57:37,446 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22317.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:57:47,841 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22326.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:57:53,235 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.810e+02 3.764e+02 4.516e+02 5.475e+02 8.626e+02, threshold=9.031e+02, percent-clipped=0.0
2023-03-08 20:58:23,009 INFO [train.py:898] (3/4) Epoch 7, batch 550, loss[loss=0.2003, simple_loss=0.2915, pruned_loss=0.05457, over 18348.00 frames. ], tot_loss[loss=0.216, simple_loss=0.2943, pruned_loss=0.06887, over 3345590.48 frames. ], batch size: 55, lr: 1.66e-02, grad_scale: 8.0
2023-03-08 20:58:43,960 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0
2023-03-08 20:58:57,686 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22385.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:59:21,871 INFO [train.py:898] (3/4) Epoch 7, batch 600, loss[loss=0.2239, simple_loss=0.3, pruned_loss=0.07393, over 18369.00 frames. ], tot_loss[loss=0.2143, simple_loss=0.2927, pruned_loss=0.06794, over 3409572.37 frames. ], batch size: 50, lr: 1.66e-02, grad_scale: 8.0
2023-03-08 20:59:38,701 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22419.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 20:59:52,060 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.568e+02 3.797e+02 4.706e+02 5.910e+02 1.335e+03, threshold=9.411e+02, percent-clipped=3.0
2023-03-08 21:00:09,541 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22446.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:00:19,240 INFO [train.py:898] (3/4) Epoch 7, batch 650, loss[loss=0.1673, simple_loss=0.2503, pruned_loss=0.04221, over 18165.00 frames. ], tot_loss[loss=0.2138, simple_loss=0.2925, pruned_loss=0.06754, over 3462810.83 frames. ], batch size: 44, lr: 1.65e-02, grad_scale: 8.0
2023-03-08 21:00:35,198 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22467.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:00:57,044 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22486.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:01:18,249 INFO [train.py:898] (3/4) Epoch 7, batch 700, loss[loss=0.2245, simple_loss=0.3083, pruned_loss=0.07035, over 18616.00 frames. ], tot_loss[loss=0.2145, simple_loss=0.2936, pruned_loss=0.06773, over 3487259.99 frames. ], batch size: 52, lr: 1.65e-02, grad_scale: 8.0
2023-03-08 21:01:19,485 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22506.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:01:37,781 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.94 vs. limit=5.0
2023-03-08 21:01:49,200 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.755e+02 3.999e+02 4.789e+02 5.664e+02 1.125e+03, threshold=9.578e+02, percent-clipped=2.0
2023-03-08 21:01:50,642 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22532.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:02:15,568 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22554.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:02:16,512 INFO [train.py:898] (3/4) Epoch 7, batch 750, loss[loss=0.227, simple_loss=0.3082, pruned_loss=0.07285, over 18254.00 frames. ], tot_loss[loss=0.2136, simple_loss=0.2927, pruned_loss=0.06723, over 3518799.94 frames. ], batch size: 60, lr: 1.65e-02, grad_scale: 8.0
2023-03-08 21:02:37,339 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22571.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:02:38,602 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0705, 5.2711, 2.6797, 5.0129, 4.9766, 5.2591, 5.1131, 2.8109], device='cuda:3'), covar=tensor([0.0154, 0.0061, 0.0746, 0.0078, 0.0068, 0.0062, 0.0099, 0.0924], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0054, 0.0083, 0.0068, 0.0064, 0.0055, 0.0069, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3')
2023-03-08 21:03:01,804 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22593.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:03:15,340 INFO [train.py:898] (3/4) Epoch 7, batch 800, loss[loss=0.183, simple_loss=0.2628, pruned_loss=0.05157, over 18505.00 frames. ], tot_loss[loss=0.2134, simple_loss=0.2923, pruned_loss=0.06722, over 3532421.09 frames. ], batch size: 47, lr: 1.65e-02, grad_scale: 8.0
2023-03-08 21:03:41,335 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4182, 5.0385, 5.0548, 5.0189, 4.6795, 4.9610, 4.0758, 4.8283], device='cuda:3'), covar=tensor([0.0283, 0.0366, 0.0236, 0.0250, 0.0328, 0.0242, 0.1549, 0.0343], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0186, 0.0171, 0.0180, 0.0176, 0.0186, 0.0258, 0.0168], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005], device='cuda:3')
2023-03-08 21:03:41,367 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22626.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:03:46,426 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.314e+02 4.023e+02 4.944e+02 6.162e+02 1.524e+03, threshold=9.887e+02, percent-clipped=2.0
2023-03-08 21:03:55,299 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22639.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:04:13,321 INFO [train.py:898] (3/4) Epoch 7, batch 850, loss[loss=0.1968, simple_loss=0.2705, pruned_loss=0.06149, over 18424.00 frames. ], tot_loss[loss=0.2128, simple_loss=0.2917, pruned_loss=0.06695, over 3552938.29 frames. ], batch size: 43, lr: 1.65e-02, grad_scale: 8.0
], batch size: 43, lr: 1.65e-02, grad_scale: 8.0 2023-03-08 21:04:28,058 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6398, 2.9724, 4.2474, 4.0599, 2.5552, 4.4036, 3.8182, 2.5446], device='cuda:3'), covar=tensor([0.0354, 0.1093, 0.0190, 0.0174, 0.1421, 0.0131, 0.0316, 0.1044], device='cuda:3'), in_proj_covar=tensor([0.0159, 0.0200, 0.0113, 0.0118, 0.0195, 0.0154, 0.0167, 0.0181], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 21:04:36,641 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22674.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:05:07,178 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22700.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 21:05:12,412 INFO [train.py:898] (3/4) Epoch 7, batch 900, loss[loss=0.2329, simple_loss=0.3144, pruned_loss=0.07569, over 18456.00 frames. ], tot_loss[loss=0.2142, simple_loss=0.2935, pruned_loss=0.06746, over 3572281.96 frames. ], batch size: 59, lr: 1.65e-02, grad_scale: 8.0 2023-03-08 21:05:23,164 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9728, 4.1604, 5.1870, 3.1161, 4.1633, 2.7430, 3.0022, 2.3023], device='cuda:3'), covar=tensor([0.0742, 0.0537, 0.0060, 0.0561, 0.0664, 0.1849, 0.2009, 0.1504], device='cuda:3'), in_proj_covar=tensor([0.0177, 0.0191, 0.0089, 0.0147, 0.0203, 0.0230, 0.0232, 0.0192], device='cuda:3'), out_proj_covar=tensor([1.6336e-04, 1.8219e-04, 8.7129e-05, 1.3968e-04, 1.9335e-04, 2.1965e-04, 2.2313e-04, 1.8405e-04], device='cuda:3') 2023-03-08 21:05:44,110 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.332e+02 3.928e+02 4.679e+02 5.681e+02 1.388e+03, threshold=9.358e+02, percent-clipped=3.0 2023-03-08 21:05:45,586 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2427, 5.8729, 5.3916, 5.5352, 5.2751, 5.3711, 5.8750, 5.8295], device='cuda:3'), covar=tensor([0.1067, 0.0642, 0.0430, 0.0698, 0.1459, 0.0543, 0.0511, 0.0609], device='cuda:3'), in_proj_covar=tensor([0.0443, 0.0369, 0.0282, 0.0399, 0.0538, 0.0395, 0.0480, 0.0372], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 21:05:55,312 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22741.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:06:11,228 INFO [train.py:898] (3/4) Epoch 7, batch 950, loss[loss=0.204, simple_loss=0.2841, pruned_loss=0.06188, over 18517.00 frames. ], tot_loss[loss=0.2141, simple_loss=0.2937, pruned_loss=0.06727, over 3579468.79 frames. ], batch size: 47, lr: 1.64e-02, grad_scale: 8.0 2023-03-08 21:06:13,127 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0 2023-03-08 21:06:49,081 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22786.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:06:58,155 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22794.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:07:10,655 INFO [train.py:898] (3/4) Epoch 7, batch 1000, loss[loss=0.2429, simple_loss=0.3209, pruned_loss=0.08249, over 16465.00 frames. ], tot_loss[loss=0.2144, simple_loss=0.2939, pruned_loss=0.06748, over 3582302.03 frames. 
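Each [zipformer.py:625] entry comes from one encoder stack deciding which of its layers to skip for the current batch; the (warmup_begin, warmup_end) pairs are per-stack warmup windows in batches, staggered between 666.7 and 4000.0 across the five stacks. With batch_count near 22700 the run is long past warmup, so num_to_drop is almost always 0, yet the occasional num_to_drop=1 (batch_count=22700.0 just above) shows a small residual drop probability remains. A sketch of a schedule with that shape; p_start and p_floor are assumed values, not the constants in zipformer.py:

    import random

    def pick_layers_to_drop(num_layers, batch_count, warmup_begin, warmup_end,
                            p_start=0.5, p_floor=0.075):
        if batch_count < warmup_begin:
            p = p_start
        elif batch_count < warmup_end:
            t = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = p_start + t * (p_floor - p_start)   # ramp down across the window
        else:
            p = p_floor                              # small long-run residual rate
        return {i for i in range(num_layers) if random.random() < p}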
2023-03-08 21:07:33,426 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22825.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:07:37,558 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8414, 3.6476, 5.1382, 4.2106, 2.7464, 2.8661, 4.3001, 5.2394], device='cuda:3'), covar=tensor([0.0796, 0.1562, 0.0068, 0.0313, 0.1104, 0.1238, 0.0370, 0.0128], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0205, 0.0077, 0.0144, 0.0166, 0.0169, 0.0152, 0.0105], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002], device='cuda:3')
2023-03-08 21:07:41,078 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.326e+02 3.793e+02 5.006e+02 6.046e+02 1.509e+03, threshold=1.001e+03, percent-clipped=4.0
2023-03-08 21:07:42,986 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22832.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:07:45,197 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22834.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:08:09,926 INFO [train.py:898] (3/4) Epoch 7, batch 1050, loss[loss=0.195, simple_loss=0.2794, pruned_loss=0.05533, over 18378.00 frames. ], tot_loss[loss=0.2134, simple_loss=0.2931, pruned_loss=0.06686, over 3592564.65 frames. ], batch size: 50, lr: 1.64e-02, grad_scale: 8.0
2023-03-08 21:08:10,359 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22855.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:08:24,197 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4920, 3.6407, 5.2886, 4.3310, 3.3765, 3.0666, 4.4792, 5.3830], device='cuda:3'), covar=tensor([0.0992, 0.1672, 0.0056, 0.0321, 0.0816, 0.1123, 0.0322, 0.0075], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0202, 0.0076, 0.0143, 0.0162, 0.0167, 0.0150, 0.0104], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002], device='cuda:3')
2023-03-08 21:08:28,624 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22871.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:08:38,425 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22880.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:08:47,280 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22886.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:08:55,775 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22893.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:09:09,173 INFO [train.py:898] (3/4) Epoch 7, batch 1100, loss[loss=0.2871, simple_loss=0.3368, pruned_loss=0.1187, over 12154.00 frames. ], tot_loss[loss=0.2145, simple_loss=0.2942, pruned_loss=0.06741, over 3590337.86 frames. ], batch size: 129, lr: 1.64e-02, grad_scale: 8.0
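The attn_weights_entropy tensors from [zipformer.py:1455] hold one entropy value per attention head (eight heads here), alongside variation summaries of the attention module's projections. Values near 5 mean a head spreads its weight almost uniformly over the sequence; values near 2.5 mean it concentrates on a handful of frames. A sketch of the per-head statistic, assuming attn_weights of shape (num_heads, num_queries, num_keys) whose rows sum to 1:

    import torch

    def attention_entropy(attn_weights, eps=1e-20):
        # attn_weights: (num_heads, num_queries, num_keys), softmax output.
        h = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return h.mean(dim=-1)   # one value per head, as in the logged tensors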
2023-03-08 21:09:25,081 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22919.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:09:38,084 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.302e+02 3.634e+02 4.460e+02 5.357e+02 1.645e+03, threshold=8.921e+02, percent-clipped=3.0
2023-03-08 21:09:51,752 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=22941.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:09:57,836 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1902, 3.5390, 2.5382, 3.4517, 4.1634, 2.4700, 3.4438, 3.4768], device='cuda:3'), covar=tensor([0.0076, 0.0749, 0.1150, 0.0472, 0.0062, 0.1099, 0.0509, 0.0573], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0194, 0.0178, 0.0178, 0.0077, 0.0167, 0.0191, 0.0188], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-08 21:10:07,847 INFO [train.py:898] (3/4) Epoch 7, batch 1150, loss[loss=0.1958, simple_loss=0.2733, pruned_loss=0.05912, over 18367.00 frames. ], tot_loss[loss=0.2148, simple_loss=0.2939, pruned_loss=0.0679, over 3589405.55 frames. ], batch size: 46, lr: 1.64e-02, grad_scale: 8.0
2023-03-08 21:10:55,072 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22995.0, num_to_drop=1, layers_to_drop={2}
2023-03-08 21:10:56,103 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4518, 5.1649, 5.5858, 5.4943, 5.3790, 6.1631, 5.7989, 5.6602], device='cuda:3'), covar=tensor([0.0850, 0.0684, 0.0579, 0.0552, 0.1387, 0.0666, 0.0562, 0.1489], device='cuda:3'), in_proj_covar=tensor([0.0271, 0.0202, 0.0206, 0.0203, 0.0244, 0.0295, 0.0196, 0.0291], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3')
2023-03-08 21:11:06,252 INFO [train.py:898] (3/4) Epoch 7, batch 1200, loss[loss=0.2263, simple_loss=0.3078, pruned_loss=0.07242, over 18105.00 frames. ], tot_loss[loss=0.2159, simple_loss=0.2946, pruned_loss=0.06858, over 3585218.69 frames. ], batch size: 62, lr: 1.64e-02, grad_scale: 8.0
2023-03-08 21:11:35,898 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.453e+02 4.180e+02 4.856e+02 6.104e+02 1.411e+03, threshold=9.713e+02, percent-clipped=4.0
2023-03-08 21:11:39,002 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.59 vs. limit=5.0
2023-03-08 21:11:49,392 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23041.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:12:05,442 INFO [train.py:898] (3/4) Epoch 7, batch 1250, loss[loss=0.2134, simple_loss=0.2877, pruned_loss=0.0695, over 18364.00 frames. ], tot_loss[loss=0.2147, simple_loss=0.2936, pruned_loss=0.06795, over 3590890.98 frames. ], batch size: 46, lr: 1.63e-02, grad_scale: 8.0
2023-03-08 21:12:44,840 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=23089.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:13:04,076 INFO [train.py:898] (3/4) Epoch 7, batch 1300, loss[loss=0.2167, simple_loss=0.3003, pruned_loss=0.06653, over 18636.00 frames. ], tot_loss[loss=0.2151, simple_loss=0.2936, pruned_loss=0.06825, over 3594235.75 frames. ], batch size: 52, lr: 1.63e-02, grad_scale: 8.0
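Every train.py:898 entry carries three loss values: pruned_loss is the full pruned RNN-T loss, simple_loss is the cheap auxiliary loss whose lattice chooses the pruning bounds, and loss is their weighted sum. The weighting can be read off the log itself: with a simple-loss scale of 0.5, batch 550 gives 0.5 * 0.2915 + 0.05457 = 0.2003 and batch 1300 gives 0.5 * 0.3003 + 0.06653 = 0.2167, exactly the reported losses. A sketch of that combination (any warmup-dependent reweighting used very early in training is omitted here):

    def combine_losses(simple_loss, pruned_loss, simple_loss_scale=0.5):
        return simple_loss_scale * simple_loss + pruned_loss

    assert abs(combine_losses(0.3003, 0.06653) - 0.2167) < 5e-4  # batch 1300 above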
2023-03-08 21:13:07,909 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23108.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:13:26,379 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2743, 4.3885, 5.3202, 2.9256, 4.4799, 3.0976, 3.1158, 2.3074], device='cuda:3'), covar=tensor([0.0604, 0.0424, 0.0044, 0.0576, 0.0486, 0.1671, 0.1918, 0.1429], device='cuda:3'), in_proj_covar=tensor([0.0180, 0.0196, 0.0091, 0.0148, 0.0205, 0.0235, 0.0238, 0.0194], device='cuda:3'), out_proj_covar=tensor([1.6622e-04, 1.8585e-04, 8.8373e-05, 1.4046e-04, 1.9493e-04, 2.2404e-04, 2.2868e-04, 1.8645e-04], device='cuda:3')
2023-03-08 21:13:33,526 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.650e+02 3.767e+02 4.521e+02 6.073e+02 1.537e+03, threshold=9.042e+02, percent-clipped=6.0
2023-03-08 21:13:57,456 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23150.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:14:03,025 INFO [train.py:898] (3/4) Epoch 7, batch 1350, loss[loss=0.2242, simple_loss=0.3101, pruned_loss=0.06911, over 18301.00 frames. ], tot_loss[loss=0.215, simple_loss=0.2939, pruned_loss=0.06807, over 3585168.84 frames. ], batch size: 57, lr: 1.63e-02, grad_scale: 8.0
2023-03-08 21:14:19,179 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23169.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:14:32,498 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23181.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:15:02,206 INFO [train.py:898] (3/4) Epoch 7, batch 1400, loss[loss=0.1984, simple_loss=0.2688, pruned_loss=0.064, over 17608.00 frames. ], tot_loss[loss=0.2139, simple_loss=0.2928, pruned_loss=0.06754, over 3596523.41 frames. ], batch size: 39, lr: 1.63e-02, grad_scale: 8.0
2023-03-08 21:15:05,906 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9590, 4.8745, 4.9177, 4.7792, 4.7561, 4.7544, 5.1550, 5.2028], device='cuda:3'), covar=tensor([0.0051, 0.0065, 0.0062, 0.0084, 0.0057, 0.0096, 0.0126, 0.0099], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0049, 0.0050, 0.0065, 0.0054, 0.0075, 0.0063, 0.0062], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-08 21:15:16,100 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23217.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:15:27,381 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4516, 6.0481, 5.4705, 5.8096, 5.4598, 5.5879, 6.0732, 6.0136], device='cuda:3'), covar=tensor([0.0937, 0.0491, 0.0393, 0.0618, 0.1360, 0.0556, 0.0434, 0.0535], device='cuda:3'), in_proj_covar=tensor([0.0452, 0.0367, 0.0284, 0.0407, 0.0557, 0.0411, 0.0495, 0.0387], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-08 21:15:29,710 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23229.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:15:31,616 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.272e+02 3.620e+02 4.462e+02 5.791e+02 9.661e+02, threshold=8.924e+02, percent-clipped=4.0
2023-03-08 21:15:41,497 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0
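The [scaling.py:679] lines report a whitening diagnostic for one activation tensor: channels are split into num_groups groups and the metric measures how far each group's channel covariance is from a multiple of the identity, 1.0 being perfectly white; the module reacts when the metric crosses the stated limit (2.0 for the 96- and 192-channel modules above, 5.0 for the 384-channel one). One formulation with those properties, offered as an assumption since the log shows only the metric-vs-limit comparison:

    import torch

    def whitening_metric(x, num_groups):
        # x: (num_frames, num_channels); num_channels divisible by num_groups.
        n, c = x.shape
        d = c // num_groups
        worst = 0.0
        for g in range(num_groups):
            xg = x[:, g * d:(g + 1) * d]
            xg = xg - xg.mean(dim=0, keepdim=True)
            cov = (xg.t() @ xg) / n
            # mean(eigenvalue^2) / mean(eigenvalue)^2: equals 1.0 iff cov = s * I
            metric = (cov @ cov).trace() * d / cov.trace() ** 2
            worst = max(worst, metric.item())
        return worst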
2023-03-08 21:16:00,103 INFO [train.py:898] (3/4) Epoch 7, batch 1450, loss[loss=0.2214, simple_loss=0.2923, pruned_loss=0.07527, over 18258.00 frames. ], tot_loss[loss=0.2144, simple_loss=0.2935, pruned_loss=0.06762, over 3594223.49 frames. ], batch size: 47, lr: 1.63e-02, grad_scale: 8.0
2023-03-08 21:16:27,014 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23278.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:16:40,583 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23290.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:16:42,373 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0
2023-03-08 21:16:46,402 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23295.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:16:57,594 INFO [train.py:898] (3/4) Epoch 7, batch 1500, loss[loss=0.2134, simple_loss=0.2903, pruned_loss=0.06823, over 18395.00 frames. ], tot_loss[loss=0.2145, simple_loss=0.2932, pruned_loss=0.06788, over 3574698.76 frames. ], batch size: 48, lr: 1.63e-02, grad_scale: 8.0
2023-03-08 21:17:27,943 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.706e+02 4.019e+02 4.944e+02 6.041e+02 1.007e+03, threshold=9.887e+02, percent-clipped=2.0
2023-03-08 21:17:37,314 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4539, 1.7882, 2.8422, 2.9048, 3.5362, 5.1061, 4.3409, 4.1721], device='cuda:3'), covar=tensor([0.0751, 0.1423, 0.1465, 0.0804, 0.1210, 0.0046, 0.0300, 0.0270], device='cuda:3'), in_proj_covar=tensor([0.0191, 0.0244, 0.0237, 0.0220, 0.0322, 0.0141, 0.0214, 0.0171], device='cuda:3'), out_proj_covar=tensor([1.3011e-04, 1.6577e-04, 1.6923e-04, 1.3904e-04, 2.2018e-04, 9.2663e-05, 1.4080e-04, 1.1594e-04], device='cuda:3')
2023-03-08 21:17:41,550 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=23343.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:17:55,129 INFO [train.py:898] (3/4) Epoch 7, batch 1550, loss[loss=0.2067, simple_loss=0.2919, pruned_loss=0.06078, over 18377.00 frames. ], tot_loss[loss=0.2144, simple_loss=0.2933, pruned_loss=0.06773, over 3580335.74 frames. ], batch size: 50, lr: 1.62e-02, grad_scale: 8.0
2023-03-08 21:18:21,084 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0860, 4.9474, 5.0980, 4.9963, 5.0592, 5.6897, 5.3167, 5.1718], device='cuda:3'), covar=tensor([0.0771, 0.0629, 0.0608, 0.0557, 0.1154, 0.0696, 0.0563, 0.1262], device='cuda:3'), in_proj_covar=tensor([0.0273, 0.0202, 0.0212, 0.0209, 0.0245, 0.0303, 0.0200, 0.0293], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3')
2023-03-08 21:18:53,702 INFO [train.py:898] (3/4) Epoch 7, batch 1600, loss[loss=0.2188, simple_loss=0.3068, pruned_loss=0.06536, over 18353.00 frames. ], tot_loss[loss=0.214, simple_loss=0.2933, pruned_loss=0.06735, over 3587103.64 frames. ], batch size: 56, lr: 1.62e-02, grad_scale: 8.0
2023-03-08 21:19:25,463 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 3.006e+02 3.819e+02 4.598e+02 5.727e+02 1.079e+03, threshold=9.196e+02, percent-clipped=2.0
2023-03-08 21:19:47,076 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23450.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:19:52,541 INFO [train.py:898] (3/4) Epoch 7, batch 1650, loss[loss=0.2088, simple_loss=0.2902, pruned_loss=0.06368, over 18394.00 frames. ], tot_loss[loss=0.2145, simple_loss=0.2936, pruned_loss=0.06773, over 3575887.49 frames. ], batch size: 52, lr: 1.62e-02, grad_scale: 8.0
2023-03-08 21:20:04,712 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23464.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:20:24,344 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23481.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:20:32,477 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23488.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:20:43,643 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=23498.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:20:51,423 INFO [train.py:898] (3/4) Epoch 7, batch 1700, loss[loss=0.2336, simple_loss=0.3166, pruned_loss=0.07534, over 16369.00 frames. ], tot_loss[loss=0.2139, simple_loss=0.2933, pruned_loss=0.06724, over 3582994.37 frames. ], batch size: 94, lr: 1.62e-02, grad_scale: 8.0
2023-03-08 21:21:21,207 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=23529.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:21:23,330 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.409e+02 3.809e+02 4.583e+02 5.657e+02 1.396e+03, threshold=9.165e+02, percent-clipped=6.0
2023-03-08 21:21:30,711 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7337, 2.5530, 4.5697, 4.2500, 2.3555, 4.5336, 4.0378, 2.8403], device='cuda:3'), covar=tensor([0.0325, 0.1375, 0.0089, 0.0165, 0.1587, 0.0151, 0.0335, 0.0961], device='cuda:3'), in_proj_covar=tensor([0.0163, 0.0206, 0.0116, 0.0123, 0.0200, 0.0159, 0.0170, 0.0181], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 21:21:41,500 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0
2023-03-08 21:21:44,286 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23549.0, num_to_drop=1, layers_to_drop={0}
2023-03-08 21:21:50,738 INFO [train.py:898] (3/4) Epoch 7, batch 1750, loss[loss=0.1937, simple_loss=0.2742, pruned_loss=0.0566, over 18269.00 frames. ], tot_loss[loss=0.2147, simple_loss=0.2937, pruned_loss=0.0678, over 3582323.77 frames. ], batch size: 47, lr: 1.62e-02, grad_scale: 16.0
2023-03-08 21:22:13,852 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23573.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:22:24,647 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0
2023-03-08 21:22:27,622 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23585.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:22:48,719 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7331, 1.9102, 2.9661, 2.8849, 3.6008, 5.2976, 4.5550, 4.4702], device='cuda:3'), covar=tensor([0.0690, 0.1484, 0.1479, 0.0925, 0.1260, 0.0039, 0.0298, 0.0252], device='cuda:3'), in_proj_covar=tensor([0.0191, 0.0245, 0.0240, 0.0222, 0.0325, 0.0140, 0.0213, 0.0171], device='cuda:3'), out_proj_covar=tensor([1.2942e-04, 1.6599e-04, 1.7062e-04, 1.3948e-04, 2.2122e-04, 9.1779e-05, 1.3940e-04, 1.1525e-04], device='cuda:3')
2023-03-08 21:22:50,502 INFO [train.py:898] (3/4) Epoch 7, batch 1800, loss[loss=0.2459, simple_loss=0.3118, pruned_loss=0.08997, over 12642.00 frames. ], tot_loss[loss=0.215, simple_loss=0.294, pruned_loss=0.06798, over 3584706.28 frames. ], batch size: 129, lr: 1.62e-02, grad_scale: 16.0
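grad_scale is the dynamic loss scale of the fp16 run: it doubled from 8.0 to 16.0 at batch 1750 after a long stretch of overflow-free steps, and later entries show it halving again whenever a scaled gradient overflows. This is the standard torch.cuda.amp pattern; the sketch below shows where the logged value comes from (model, optimizer and batch are placeholders):

    import torch

    scaler = torch.cuda.amp.GradScaler()   # maintains grad_scale dynamically

    def train_step(model, optimizer, batch):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = model(batch)            # placeholder forward pass
        scaler.scale(loss).backward()      # backward on the scaled loss
        scaler.step(optimizer)             # skipped if gradients overflowed
        scaler.update()                    # halves on overflow, doubles periodically
        return scaler.get_scale()          # the grad_scale printed in the log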
2023-03-08 21:23:18,312 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.2904, 3.5855, 5.0873, 4.3395, 2.8574, 2.6848, 4.2934, 5.1051], device='cuda:3'), covar=tensor([0.0955, 0.1401, 0.0070, 0.0249, 0.1026, 0.1156, 0.0338, 0.0138], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0209, 0.0079, 0.0150, 0.0168, 0.0170, 0.0156, 0.0109], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-08 21:23:21,010 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.500e+02 3.624e+02 4.625e+02 5.651e+02 1.017e+03, threshold=9.251e+02, percent-clipped=3.0
2023-03-08 21:23:47,600 INFO [train.py:898] (3/4) Epoch 7, batch 1850, loss[loss=0.232, simple_loss=0.3111, pruned_loss=0.07644, over 16204.00 frames. ], tot_loss[loss=0.2162, simple_loss=0.2948, pruned_loss=0.06881, over 3583811.63 frames. ], batch size: 94, lr: 1.61e-02, grad_scale: 16.0
2023-03-08 21:23:59,969 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0
2023-03-08 21:24:45,517 INFO [train.py:898] (3/4) Epoch 7, batch 1900, loss[loss=0.2201, simple_loss=0.3136, pruned_loss=0.06336, over 18466.00 frames. ], tot_loss[loss=0.2165, simple_loss=0.2954, pruned_loss=0.06878, over 3573057.22 frames. ], batch size: 59, lr: 1.61e-02, grad_scale: 16.0
2023-03-08 21:24:52,568 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3683, 4.3294, 2.6021, 4.2973, 5.3836, 2.5651, 4.0374, 3.9079], device='cuda:3'), covar=tensor([0.0057, 0.0864, 0.1411, 0.0507, 0.0041, 0.1229, 0.0555, 0.0682], device='cuda:3'), in_proj_covar=tensor([0.0091, 0.0193, 0.0180, 0.0179, 0.0078, 0.0167, 0.0193, 0.0188], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-08 21:25:17,160 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.556e+02 4.080e+02 4.948e+02 6.191e+02 1.850e+03, threshold=9.895e+02, percent-clipped=8.0
2023-03-08 21:25:28,632 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7436, 3.0668, 4.5402, 4.1401, 2.7555, 4.6384, 3.9852, 3.0779], device='cuda:3'), covar=tensor([0.0360, 0.0966, 0.0076, 0.0185, 0.1239, 0.0129, 0.0313, 0.0800], device='cuda:3'), in_proj_covar=tensor([0.0162, 0.0202, 0.0118, 0.0123, 0.0199, 0.0161, 0.0170, 0.0181], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-08 21:25:43,142 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3371, 2.5138, 2.3067, 2.7242, 3.2989, 3.2630, 2.8139, 2.7860], device='cuda:3'), covar=tensor([0.0277, 0.0282, 0.0778, 0.0322, 0.0185, 0.0163, 0.0374, 0.0300], device='cuda:3'), in_proj_covar=tensor([0.0107, 0.0085, 0.0142, 0.0116, 0.0082, 0.0070, 0.0111, 0.0108], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-08 21:25:43,792 INFO [train.py:898] (3/4) Epoch 7, batch 1950, loss[loss=0.2628, simple_loss=0.3315, pruned_loss=0.09704, over 18456.00 frames. ], tot_loss[loss=0.2164, simple_loss=0.2951, pruned_loss=0.06878, over 3566361.84 frames. ], batch size: 59, lr: 1.61e-02, grad_scale: 16.0
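The lr column decays smoothly inside the epoch (1.66e-02 at batch 550, 1.61e-02 by batch 1950) and steps down at epoch boundaries. Both movements are reproduced by the Eden schedule configured for this run (base_lr=0.05, lr_batches=5000, lr_epochs=3.5) if the epoch term counts completed epochs; that convention is a reading of the logged numbers rather than a quote from train.py:

    def eden_lr(step, epochs_done, base_lr=0.05, lr_batches=5000.0, lr_epochs=3.5):
        step_factor = ((step ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epochs_done ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * step_factor * epoch_factor

    print(f"{eden_lr(22317, 6):.2e}")   # 1.66e-02, the lr logged at batch_count=22317.0
    print(f"{eden_lr(25440, 7):.2e}")   # 1.47e-02, the lr when epoch 8 starts below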
2023-03-08 21:25:54,320 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23764.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:26:15,527 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9109, 3.7097, 3.4375, 3.1299, 3.4430, 2.8948, 2.5909, 3.8013], device='cuda:3'), covar=tensor([0.0026, 0.0069, 0.0072, 0.0094, 0.0066, 0.0139, 0.0171, 0.0058], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0088, 0.0079, 0.0122, 0.0080, 0.0124, 0.0131, 0.0070], device='cuda:3'), out_proj_covar=tensor([8.9798e-05, 1.3373e-04, 1.1834e-04, 1.9237e-04, 1.1941e-04, 1.9221e-04, 2.0208e-04, 1.0170e-04], device='cuda:3')
2023-03-08 21:26:21,570 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7050, 3.3037, 1.9552, 4.4225, 3.0086, 4.6462, 2.3070, 3.8663], device='cuda:3'), covar=tensor([0.0460, 0.0869, 0.1499, 0.0374, 0.0806, 0.0196, 0.1128, 0.0383], device='cuda:3'), in_proj_covar=tensor([0.0169, 0.0199, 0.0169, 0.0197, 0.0170, 0.0178, 0.0179, 0.0168], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002], device='cuda:3')
2023-03-08 21:26:42,012 INFO [train.py:898] (3/4) Epoch 7, batch 2000, loss[loss=0.2262, simple_loss=0.3105, pruned_loss=0.07096, over 18134.00 frames. ], tot_loss[loss=0.2165, simple_loss=0.295, pruned_loss=0.06902, over 3557106.12 frames. ], batch size: 62, lr: 1.61e-02, grad_scale: 8.0
2023-03-08 21:26:50,349 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=23812.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:27:13,553 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.566e+02 3.909e+02 4.831e+02 5.895e+02 1.179e+03, threshold=9.662e+02, percent-clipped=2.0
2023-03-08 21:27:29,152 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23844.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 21:27:40,876 INFO [train.py:898] (3/4) Epoch 7, batch 2050, loss[loss=0.2148, simple_loss=0.287, pruned_loss=0.07134, over 18550.00 frames. ], tot_loss[loss=0.2155, simple_loss=0.2945, pruned_loss=0.0683, over 3567702.05 frames. ], batch size: 45, lr: 1.61e-02, grad_scale: 8.0
2023-03-08 21:27:49,171 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6346, 3.4789, 3.2717, 2.9849, 3.1047, 2.8438, 2.6288, 3.7201], device='cuda:3'), covar=tensor([0.0034, 0.0073, 0.0077, 0.0125, 0.0112, 0.0147, 0.0172, 0.0062], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0088, 0.0079, 0.0124, 0.0081, 0.0123, 0.0131, 0.0070], device='cuda:3'), out_proj_covar=tensor([9.0106e-05, 1.3411e-04, 1.1870e-04, 1.9455e-04, 1.1972e-04, 1.9070e-04, 2.0143e-04, 1.0193e-04], device='cuda:3')
2023-03-08 21:28:01,646 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23873.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:28:16,354 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23885.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:28:38,826 INFO [train.py:898] (3/4) Epoch 7, batch 2100, loss[loss=0.1809, simple_loss=0.256, pruned_loss=0.05286, over 18491.00 frames. ], tot_loss[loss=0.2147, simple_loss=0.294, pruned_loss=0.06772, over 3562108.37 frames. ], batch size: 43, lr: 1.61e-02, grad_scale: 8.0
2023-03-08 21:28:57,399 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=23921.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:29:09,535 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.380e+02 3.638e+02 4.339e+02 5.858e+02 1.130e+03, threshold=8.677e+02, percent-clipped=1.0
2023-03-08 21:29:10,878 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=23933.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:29:37,936 INFO [train.py:898] (3/4) Epoch 7, batch 2150, loss[loss=0.2109, simple_loss=0.2963, pruned_loss=0.06275, over 18273.00 frames. ], tot_loss[loss=0.2147, simple_loss=0.2941, pruned_loss=0.06766, over 3575624.04 frames. ], batch size: 54, lr: 1.60e-02, grad_scale: 8.0
2023-03-08 21:30:07,617 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0924, 4.2550, 2.4227, 4.1559, 5.1885, 2.1398, 3.7197, 3.8192], device='cuda:3'), covar=tensor([0.0063, 0.0906, 0.1472, 0.0542, 0.0037, 0.1521, 0.0659, 0.0682], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0193, 0.0179, 0.0178, 0.0077, 0.0167, 0.0193, 0.0186], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-08 21:30:41,408 INFO [train.py:898] (3/4) Epoch 7, batch 2200, loss[loss=0.249, simple_loss=0.314, pruned_loss=0.09198, over 12458.00 frames. ], tot_loss[loss=0.2148, simple_loss=0.2942, pruned_loss=0.06768, over 3580041.87 frames. ], batch size: 129, lr: 1.60e-02, grad_scale: 8.0
2023-03-08 21:31:04,160 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9415, 4.8784, 4.5048, 4.8456, 4.8838, 4.3022, 4.8095, 4.5221], device='cuda:3'), covar=tensor([0.0379, 0.0449, 0.1395, 0.0733, 0.0473, 0.0411, 0.0393, 0.0840], device='cuda:3'), in_proj_covar=tensor([0.0351, 0.0389, 0.0550, 0.0316, 0.0290, 0.0367, 0.0390, 0.0493], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-08 21:31:11,878 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.180e+02 3.947e+02 4.705e+02 5.545e+02 1.194e+03, threshold=9.409e+02, percent-clipped=3.0
2023-03-08 21:31:17,799 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24037.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:31:40,402 INFO [train.py:898] (3/4) Epoch 7, batch 2250, loss[loss=0.2203, simple_loss=0.3018, pruned_loss=0.06946, over 18354.00 frames. ], tot_loss[loss=0.2152, simple_loss=0.2947, pruned_loss=0.06783, over 3577639.70 frames. ], batch size: 56, lr: 1.60e-02, grad_scale: 8.0
2023-03-08 21:32:30,481 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24098.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:32:38,522 INFO [train.py:898] (3/4) Epoch 7, batch 2300, loss[loss=0.2038, simple_loss=0.2776, pruned_loss=0.06505, over 18382.00 frames. ], tot_loss[loss=0.2156, simple_loss=0.2954, pruned_loss=0.06793, over 3582838.76 frames. ], batch size: 42, lr: 1.60e-02, grad_scale: 8.0
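tot_loss is not a plain running mean over the epoch: its frame count climbs from 3.35e6 at batch 550 and flattens out near 3.59e6, the signature of an accumulator that decays by a constant factor each batch. With roughly 18k frames per typical batch, a decay of 1 - 1/200 per step levels off around 200 * 18k = 3.6e6 frames, matching the plateau; the 1/200 decay is an assumption inferred from that plateau. A sketch of such a tracker:

    class DecayingLossTracker:
        def __init__(self, reset_interval=200):
            self.decay = 1.0 - 1.0 / reset_interval
            self.frames = 0.0
            self.loss_sum = 0.0

        def update(self, batch_frames, batch_loss_per_frame):
            self.frames = self.frames * self.decay + batch_frames
            self.loss_sum = self.loss_sum * self.decay + batch_frames * batch_loss_per_frame
            # -> the tot_loss[..., over N frames] pair printed each batch
            return self.loss_sum / self.frames, self.frames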
2023-03-08 21:32:58,128 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4071, 4.1129, 4.1381, 3.1323, 3.4108, 3.2697, 2.4985, 1.8439], device='cuda:3'), covar=tensor([0.0238, 0.0150, 0.0069, 0.0258, 0.0367, 0.0244, 0.0743, 0.1018], device='cuda:3'), in_proj_covar=tensor([0.0049, 0.0044, 0.0040, 0.0051, 0.0073, 0.0049, 0.0069, 0.0075], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0005, 0.0003, 0.0004, 0.0005], device='cuda:3')
2023-03-08 21:33:08,296 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4571, 3.6097, 5.2054, 4.1282, 3.0486, 2.8897, 4.5362, 5.2565], device='cuda:3'), covar=tensor([0.0900, 0.1514, 0.0064, 0.0349, 0.0940, 0.1095, 0.0301, 0.0156], device='cuda:3'), in_proj_covar=tensor([0.0134, 0.0209, 0.0079, 0.0148, 0.0165, 0.0168, 0.0155, 0.0107], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-08 21:33:08,946 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.576e+02 4.201e+02 5.158e+02 6.206e+02 1.861e+03, threshold=1.032e+03, percent-clipped=10.0
2023-03-08 21:33:13,598 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5306, 3.7515, 2.4042, 3.7330, 4.5299, 2.5388, 3.4988, 3.5305], device='cuda:3'), covar=tensor([0.0087, 0.0858, 0.1400, 0.0560, 0.0059, 0.1110, 0.0646, 0.0744], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0188, 0.0176, 0.0176, 0.0076, 0.0163, 0.0188, 0.0181], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-08 21:33:23,100 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=24144.0, num_to_drop=1, layers_to_drop={1}
2023-03-08 21:33:36,519 INFO [train.py:898] (3/4) Epoch 7, batch 2350, loss[loss=0.1921, simple_loss=0.2737, pruned_loss=0.05527, over 18269.00 frames. ], tot_loss[loss=0.2151, simple_loss=0.2948, pruned_loss=0.06771, over 3590230.09 frames. ], batch size: 47, lr: 1.60e-02, grad_scale: 8.0
2023-03-08 21:34:19,527 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=24192.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:34:36,080 INFO [train.py:898] (3/4) Epoch 7, batch 2400, loss[loss=0.2037, simple_loss=0.2833, pruned_loss=0.06203, over 18210.00 frames. ], tot_loss[loss=0.2139, simple_loss=0.2941, pruned_loss=0.06687, over 3593609.80 frames. ], batch size: 60, lr: 1.60e-02, grad_scale: 8.0
2023-03-08 21:35:09,122 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.669e+02 3.723e+02 4.227e+02 5.514e+02 9.877e+02, threshold=8.454e+02, percent-clipped=0.0
2023-03-08 21:35:35,277 INFO [train.py:898] (3/4) Epoch 7, batch 2450, loss[loss=0.2093, simple_loss=0.2938, pruned_loss=0.06234, over 18494.00 frames. ], tot_loss[loss=0.2129, simple_loss=0.2934, pruned_loss=0.0662, over 3596610.62 frames. ], batch size: 53, lr: 1.59e-02, grad_scale: 8.0
2023-03-08 21:36:20,043 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7649, 4.7597, 4.6868, 4.5539, 4.4627, 4.6225, 4.9734, 4.8149], device='cuda:3'), covar=tensor([0.0073, 0.0065, 0.0080, 0.0095, 0.0080, 0.0095, 0.0107, 0.0130], device='cuda:3'), in_proj_covar=tensor([0.0071, 0.0051, 0.0053, 0.0067, 0.0056, 0.0077, 0.0067, 0.0066], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0002, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-08 21:36:33,634 INFO [train.py:898] (3/4) Epoch 7, batch 2500, loss[loss=0.2104, simple_loss=0.2809, pruned_loss=0.06996, over 17685.00 frames. ], tot_loss[loss=0.2131, simple_loss=0.2931, pruned_loss=0.06649, over 3578883.94 frames. ], batch size: 39, lr: 1.59e-02, grad_scale: 8.0
2023-03-08 21:36:38,999 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.67 vs. limit=2.0
2023-03-08 21:37:06,509 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.422e+02 3.679e+02 4.544e+02 5.517e+02 9.659e+02, threshold=9.088e+02, percent-clipped=5.0
2023-03-08 21:37:31,456 INFO [train.py:898] (3/4) Epoch 7, batch 2550, loss[loss=0.2158, simple_loss=0.3072, pruned_loss=0.06221, over 18367.00 frames. ], tot_loss[loss=0.2132, simple_loss=0.2934, pruned_loss=0.06652, over 3580227.74 frames. ], batch size: 55, lr: 1.59e-02, grad_scale: 8.0
2023-03-08 21:38:16,285 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=24393.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:38:29,983 INFO [train.py:898] (3/4) Epoch 7, batch 2600, loss[loss=0.225, simple_loss=0.3012, pruned_loss=0.07437, over 17000.00 frames. ], tot_loss[loss=0.2128, simple_loss=0.2929, pruned_loss=0.06636, over 3585216.70 frames. ], batch size: 78, lr: 1.59e-02, grad_scale: 4.0
2023-03-08 21:38:32,757 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7324, 3.6166, 3.5659, 3.1105, 3.1744, 2.8934, 2.8743, 3.8119], device='cuda:3'), covar=tensor([0.0036, 0.0073, 0.0057, 0.0102, 0.0089, 0.0148, 0.0148, 0.0041], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0088, 0.0076, 0.0122, 0.0079, 0.0121, 0.0130, 0.0071], device='cuda:3'), out_proj_covar=tensor([9.1678e-05, 1.3288e-04, 1.1349e-04, 1.9188e-04, 1.1707e-04, 1.8758e-04, 2.0032e-04, 1.0272e-04], device='cuda:3')
2023-03-08 21:39:05,102 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.252e+02 3.780e+02 4.661e+02 5.457e+02 1.160e+03, threshold=9.322e+02, percent-clipped=5.0
2023-03-08 21:39:29,141 INFO [train.py:898] (3/4) Epoch 7, batch 2650, loss[loss=0.2306, simple_loss=0.3103, pruned_loss=0.07545, over 18300.00 frames. ], tot_loss[loss=0.2122, simple_loss=0.2922, pruned_loss=0.06611, over 3589647.31 frames. ], batch size: 54, lr: 1.59e-02, grad_scale: 4.0
2023-03-08 21:40:27,785 INFO [train.py:898] (3/4) Epoch 7, batch 2700, loss[loss=0.212, simple_loss=0.2855, pruned_loss=0.06925, over 18237.00 frames. ], tot_loss[loss=0.2121, simple_loss=0.2919, pruned_loss=0.0661, over 3588887.74 frames. ], batch size: 45, lr: 1.59e-02, grad_scale: 4.0
2023-03-08 21:41:02,502 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.355e+02 3.494e+02 4.469e+02 5.662e+02 1.849e+03, threshold=8.938e+02, percent-clipped=8.0
2023-03-08 21:41:11,348 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.30 vs. limit=5.0
2023-03-08 21:41:17,804 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24547.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:41:26,597 INFO [train.py:898] (3/4) Epoch 7, batch 2750, loss[loss=0.2191, simple_loss=0.2993, pruned_loss=0.06951, over 16853.00 frames. ], tot_loss[loss=0.212, simple_loss=0.2918, pruned_loss=0.06611, over 3579873.91 frames. ], batch size: 78, lr: 1.59e-02, grad_scale: 4.0
2023-03-08 21:41:36,414 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-08 21:41:43,968 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24569.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:42:02,438 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0
2023-03-08 21:42:25,724 INFO [train.py:898] (3/4) Epoch 7, batch 2800, loss[loss=0.2014, simple_loss=0.2853, pruned_loss=0.05869, over 18323.00 frames. ], tot_loss[loss=0.2126, simple_loss=0.2922, pruned_loss=0.06647, over 3570298.83 frames. ], batch size: 54, lr: 1.58e-02, grad_scale: 8.0
2023-03-08 21:42:29,411 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24608.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:42:40,626 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.05 vs. limit=5.0
2023-03-08 21:42:55,545 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24630.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:43:01,133 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.569e+02 3.885e+02 4.580e+02 5.331e+02 1.147e+03, threshold=9.161e+02, percent-clipped=3.0
2023-03-08 21:43:23,587 INFO [train.py:898] (3/4) Epoch 7, batch 2850, loss[loss=0.1762, simple_loss=0.2541, pruned_loss=0.04915, over 18488.00 frames. ], tot_loss[loss=0.214, simple_loss=0.2932, pruned_loss=0.0674, over 3567842.23 frames. ], batch size: 44, lr: 1.58e-02, grad_scale: 4.0
2023-03-08 21:44:08,963 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=24693.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:44:22,180 INFO [train.py:898] (3/4) Epoch 7, batch 2900, loss[loss=0.1845, simple_loss=0.2688, pruned_loss=0.05008, over 18264.00 frames. ], tot_loss[loss=0.2123, simple_loss=0.2917, pruned_loss=0.06643, over 3575004.05 frames. ], batch size: 47, lr: 1.58e-02, grad_scale: 4.0
2023-03-08 21:44:57,697 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.094e+02 3.800e+02 4.687e+02 5.853e+02 1.844e+03, threshold=9.374e+02, percent-clipped=5.0
2023-03-08 21:45:05,180 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=24741.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:45:20,845 INFO [train.py:898] (3/4) Epoch 7, batch 2950, loss[loss=0.225, simple_loss=0.3114, pruned_loss=0.06927, over 18574.00 frames. ], tot_loss[loss=0.2113, simple_loss=0.2908, pruned_loss=0.06587, over 3580685.60 frames. ], batch size: 54, lr: 1.58e-02, grad_scale: 4.0
2023-03-08 21:45:32,610 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24765.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:45:51,023 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.87 vs. limit=5.0
2023-03-08 21:46:20,380 INFO [train.py:898] (3/4) Epoch 7, batch 3000, loss[loss=0.2299, simple_loss=0.3112, pruned_loss=0.07431, over 16311.00 frames. ], tot_loss[loss=0.2119, simple_loss=0.2917, pruned_loss=0.06608, over 3583233.47 frames. ], batch size: 94, lr: 1.58e-02, grad_scale: 4.0
2023-03-08 21:46:20,380 INFO [train.py:923] (3/4) Computing validation loss
2023-03-08 21:46:32,363 INFO [train.py:932] (3/4) Epoch 7, validation: loss=0.1689, simple_loss=0.2715, pruned_loss=0.03314, over 944034.00 frames.
2023-03-08 21:46:32,363 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-08 21:46:36,448 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-08 21:46:57,865 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24826.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:47:08,248 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.797e+02 4.000e+02 4.650e+02 5.894e+02 1.091e+03, threshold=9.301e+02, percent-clipped=1.0
2023-03-08 21:47:23,325 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24848.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:47:30,759 INFO [train.py:898] (3/4) Epoch 7, batch 3050, loss[loss=0.2611, simple_loss=0.341, pruned_loss=0.09056, over 18497.00 frames. ], tot_loss[loss=0.211, simple_loss=0.2909, pruned_loss=0.06551, over 3590649.31 frames. ], batch size: 59, lr: 1.58e-02, grad_scale: 4.0
2023-03-08 21:47:56,921 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5914, 5.1682, 5.1815, 5.1148, 4.7471, 5.0625, 4.3550, 4.9608], device='cuda:3'), covar=tensor([0.0201, 0.0279, 0.0202, 0.0274, 0.0373, 0.0209, 0.1214, 0.0258], device='cuda:3'), in_proj_covar=tensor([0.0147, 0.0196, 0.0182, 0.0193, 0.0190, 0.0197, 0.0267, 0.0180], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3')
2023-03-08 21:48:22,965 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0
2023-03-08 21:48:26,693 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=24903.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:48:28,718 INFO [train.py:898] (3/4) Epoch 7, batch 3100, loss[loss=0.207, simple_loss=0.2951, pruned_loss=0.05949, over 18490.00 frames. ], tot_loss[loss=0.2128, simple_loss=0.2927, pruned_loss=0.06649, over 3586457.70 frames. ], batch size: 51, lr: 1.57e-02, grad_scale: 2.0
2023-03-08 21:48:33,519 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24909.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:48:52,693 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=24925.0, num_to_drop=0, layers_to_drop=set()
2023-03-08 21:49:05,431 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.662e+02 4.170e+02 4.880e+02 6.294e+02 1.409e+03, threshold=9.761e+02, percent-clipped=6.0
2023-03-08 21:49:26,002 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.83 vs. limit=2.0
2023-03-08 21:49:27,491 INFO [train.py:898] (3/4) Epoch 7, batch 3150, loss[loss=0.21, simple_loss=0.2936, pruned_loss=0.06321, over 18322.00 frames. ], tot_loss[loss=0.2132, simple_loss=0.2931, pruned_loss=0.06665, over 3591788.32 frames. ], batch size: 54, lr: 1.57e-02, grad_scale: 2.0
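The 21:46:20/21:46:32 entries above are the periodic validation pass at batch 3000: the same dev set is scored every time, which is why the count is always 944034.00 frames, and the memory line reads CUDA's allocator high-water mark. A minimal sketch of such a pass; the model and loader interfaces are placeholders:

    import torch

    @torch.no_grad()
    def validate(model, valid_loader, device):
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        for batch in valid_loader:                # fixed dev cuts each time
            loss_sum, num_frames = model(batch)   # placeholder: summed loss, frame count
            tot_loss += loss_sum.item()
            tot_frames += num_frames
        model.train()
        peak_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        return tot_loss / tot_frames, peak_mb     # loss per frame, peak memory in MB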
], batch size: 54, lr: 1.57e-02, grad_scale: 2.0 2023-03-08 21:49:55,788 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9426, 4.0233, 5.1609, 2.8933, 4.2937, 2.6771, 2.9526, 2.0668], device='cuda:3'), covar=tensor([0.0738, 0.0618, 0.0047, 0.0565, 0.0504, 0.2006, 0.2055, 0.1565], device='cuda:3'), in_proj_covar=tensor([0.0179, 0.0190, 0.0093, 0.0149, 0.0204, 0.0231, 0.0242, 0.0191], device='cuda:3'), out_proj_covar=tensor([1.6241e-04, 1.7848e-04, 8.9845e-05, 1.3924e-04, 1.9218e-04, 2.1764e-04, 2.2861e-04, 1.8226e-04], device='cuda:3') 2023-03-08 21:50:15,260 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24995.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:50:26,650 INFO [train.py:898] (3/4) Epoch 7, batch 3200, loss[loss=0.2023, simple_loss=0.293, pruned_loss=0.05575, over 18349.00 frames. ], tot_loss[loss=0.213, simple_loss=0.2931, pruned_loss=0.06646, over 3599485.97 frames. ], batch size: 55, lr: 1.57e-02, grad_scale: 4.0 2023-03-08 21:51:03,253 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.251e+02 3.758e+02 4.426e+02 5.545e+02 1.381e+03, threshold=8.852e+02, percent-clipped=2.0 2023-03-08 21:51:14,646 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-08 21:51:21,326 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7406, 5.4089, 5.4269, 5.3470, 4.9914, 5.2349, 4.6413, 5.2304], device='cuda:3'), covar=tensor([0.0208, 0.0256, 0.0159, 0.0232, 0.0319, 0.0214, 0.0992, 0.0218], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0191, 0.0178, 0.0189, 0.0188, 0.0193, 0.0261, 0.0178], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-08 21:51:25,662 INFO [train.py:898] (3/4) Epoch 7, batch 3250, loss[loss=0.1829, simple_loss=0.2579, pruned_loss=0.05395, over 18408.00 frames. ], tot_loss[loss=0.2121, simple_loss=0.2924, pruned_loss=0.06585, over 3607167.43 frames. ], batch size: 42, lr: 1.57e-02, grad_scale: 4.0 2023-03-08 21:51:27,234 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=25056.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 21:52:24,639 INFO [train.py:898] (3/4) Epoch 7, batch 3300, loss[loss=0.2788, simple_loss=0.3367, pruned_loss=0.1104, over 12267.00 frames. ], tot_loss[loss=0.2115, simple_loss=0.2919, pruned_loss=0.06558, over 3592284.80 frames. ], batch size: 130, lr: 1.57e-02, grad_scale: 4.0 2023-03-08 21:52:42,791 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25121.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:53:01,192 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.469e+02 3.921e+02 4.598e+02 5.927e+02 2.644e+03, threshold=9.195e+02, percent-clipped=9.0 2023-03-08 21:53:16,836 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5373, 6.1171, 5.5390, 5.8968, 5.6515, 5.6374, 6.1840, 6.0984], device='cuda:3'), covar=tensor([0.1126, 0.0573, 0.0402, 0.0652, 0.1230, 0.0611, 0.0433, 0.0554], device='cuda:3'), in_proj_covar=tensor([0.0460, 0.0379, 0.0285, 0.0412, 0.0561, 0.0414, 0.0507, 0.0388], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 21:53:23,210 INFO [train.py:898] (3/4) Epoch 7, batch 3350, loss[loss=0.2035, simple_loss=0.2911, pruned_loss=0.05796, over 18256.00 frames. 
], tot_loss[loss=0.2117, simple_loss=0.2918, pruned_loss=0.06578, over 3583018.65 frames. ], batch size: 47, lr: 1.57e-02, grad_scale: 4.0 2023-03-08 21:53:41,901 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.44 vs. limit=5.0 2023-03-08 21:54:17,165 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.83 vs. limit=2.0 2023-03-08 21:54:19,426 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25203.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:54:20,406 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25204.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:54:21,382 INFO [train.py:898] (3/4) Epoch 7, batch 3400, loss[loss=0.1794, simple_loss=0.2492, pruned_loss=0.05477, over 18371.00 frames. ], tot_loss[loss=0.2115, simple_loss=0.2917, pruned_loss=0.06568, over 3590314.43 frames. ], batch size: 42, lr: 1.57e-02, grad_scale: 4.0 2023-03-08 21:54:31,963 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3031, 4.3587, 2.2068, 4.3570, 5.3862, 2.7213, 3.8168, 3.7446], device='cuda:3'), covar=tensor([0.0057, 0.0712, 0.1556, 0.0481, 0.0029, 0.1147, 0.0603, 0.0730], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0195, 0.0179, 0.0179, 0.0077, 0.0167, 0.0190, 0.0186], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-08 21:54:44,246 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25225.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:54:57,014 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.655e+02 3.734e+02 4.394e+02 5.547e+02 1.008e+03, threshold=8.789e+02, percent-clipped=3.0 2023-03-08 21:55:14,404 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=25251.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:55:19,393 INFO [train.py:898] (3/4) Epoch 7, batch 3450, loss[loss=0.2178, simple_loss=0.296, pruned_loss=0.06979, over 17807.00 frames. ], tot_loss[loss=0.2122, simple_loss=0.292, pruned_loss=0.06619, over 3560006.57 frames. ], batch size: 70, lr: 1.56e-02, grad_scale: 4.0 2023-03-08 21:55:39,658 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=25273.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:55:55,658 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7897, 4.7881, 4.9445, 4.5432, 4.5950, 4.7568, 5.1035, 5.0026], device='cuda:3'), covar=tensor([0.0060, 0.0073, 0.0073, 0.0102, 0.0075, 0.0088, 0.0113, 0.0101], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0051, 0.0052, 0.0067, 0.0055, 0.0075, 0.0065, 0.0063], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 21:56:17,142 INFO [train.py:898] (3/4) Epoch 7, batch 3500, loss[loss=0.1975, simple_loss=0.2848, pruned_loss=0.05511, over 18410.00 frames. ], tot_loss[loss=0.2106, simple_loss=0.2906, pruned_loss=0.06532, over 3577187.64 frames. 
], batch size: 52, lr: 1.56e-02, grad_scale: 2.0 2023-03-08 21:56:53,685 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.862e+02 4.141e+02 4.748e+02 6.314e+02 1.477e+03, threshold=9.496e+02, percent-clipped=11.0 2023-03-08 21:57:08,523 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25351.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 21:57:12,754 INFO [train.py:898] (3/4) Epoch 7, batch 3550, loss[loss=0.2523, simple_loss=0.3124, pruned_loss=0.0961, over 12705.00 frames. ], tot_loss[loss=0.2111, simple_loss=0.2913, pruned_loss=0.06548, over 3573136.38 frames. ], batch size: 129, lr: 1.56e-02, grad_scale: 2.0 2023-03-08 21:57:51,085 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5194, 2.8394, 4.1093, 3.9254, 2.3873, 4.2863, 3.9409, 2.6875], device='cuda:3'), covar=tensor([0.0369, 0.1094, 0.0140, 0.0164, 0.1448, 0.0162, 0.0295, 0.1004], device='cuda:3'), in_proj_covar=tensor([0.0174, 0.0207, 0.0124, 0.0125, 0.0200, 0.0170, 0.0182, 0.0187], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 21:57:57,442 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2826, 5.6269, 3.0002, 5.3773, 5.2436, 5.6524, 5.4261, 3.4117], device='cuda:3'), covar=tensor([0.0129, 0.0039, 0.0616, 0.0054, 0.0065, 0.0046, 0.0074, 0.0648], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0059, 0.0085, 0.0073, 0.0068, 0.0057, 0.0071, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0005, 0.0004, 0.0004, 0.0003, 0.0004, 0.0005], device='cuda:3') 2023-03-08 21:58:07,629 INFO [train.py:898] (3/4) Epoch 7, batch 3600, loss[loss=0.1968, simple_loss=0.2864, pruned_loss=0.05358, over 18485.00 frames. ], tot_loss[loss=0.2115, simple_loss=0.2915, pruned_loss=0.06575, over 3580955.01 frames. ], batch size: 51, lr: 1.56e-02, grad_scale: 4.0 2023-03-08 21:58:24,848 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25421.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 21:58:25,272 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-08 21:58:31,899 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=25428.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 21:58:40,128 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.475e+02 3.916e+02 4.844e+02 6.068e+02 1.506e+03, threshold=9.689e+02, percent-clipped=7.0 2023-03-08 21:59:12,580 INFO [train.py:898] (3/4) Epoch 8, batch 0, loss[loss=0.2098, simple_loss=0.2852, pruned_loss=0.06717, over 18398.00 frames. ], tot_loss[loss=0.2098, simple_loss=0.2852, pruned_loss=0.06717, over 18398.00 frames. ], batch size: 48, lr: 1.47e-02, grad_scale: 8.0 2023-03-08 21:59:12,580 INFO [train.py:923] (3/4) Computing validation loss 2023-03-08 21:59:24,297 INFO [train.py:932] (3/4) Epoch 8, validation: loss=0.17, simple_loss=0.2728, pruned_loss=0.03358, over 944034.00 frames. 2023-03-08 21:59:24,298 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-08 21:59:59,983 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=25469.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:00:06,440 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-08 22:00:22,359 INFO [train.py:898] (3/4) Epoch 8, batch 50, loss[loss=0.2042, simple_loss=0.2911, pruned_loss=0.05867, over 18305.00 frames. 
], tot_loss[loss=0.2108, simple_loss=0.2926, pruned_loss=0.0645, over 813079.95 frames. ], batch size: 54, lr: 1.47e-02, grad_scale: 8.0 2023-03-08 22:00:22,773 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=25489.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 22:00:25,201 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-08 22:00:39,174 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25504.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:01:18,379 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.473e+02 3.465e+02 4.274e+02 5.083e+02 8.127e+02, threshold=8.548e+02, percent-clipped=0.0 2023-03-08 22:01:20,754 INFO [train.py:898] (3/4) Epoch 8, batch 100, loss[loss=0.1762, simple_loss=0.2594, pruned_loss=0.04647, over 17662.00 frames. ], tot_loss[loss=0.2105, simple_loss=0.2913, pruned_loss=0.06491, over 1425605.39 frames. ], batch size: 39, lr: 1.47e-02, grad_scale: 8.0 2023-03-08 22:01:35,578 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=25552.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:02:19,593 INFO [train.py:898] (3/4) Epoch 8, batch 150, loss[loss=0.1773, simple_loss=0.2568, pruned_loss=0.04885, over 18247.00 frames. ], tot_loss[loss=0.2078, simple_loss=0.2888, pruned_loss=0.06337, over 1916022.55 frames. ], batch size: 45, lr: 1.46e-02, grad_scale: 8.0 2023-03-08 22:02:34,565 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2506, 5.1755, 5.3710, 5.2475, 5.2265, 5.9814, 5.5904, 5.5225], device='cuda:3'), covar=tensor([0.0822, 0.0574, 0.0629, 0.0584, 0.1447, 0.0639, 0.0520, 0.1418], device='cuda:3'), in_proj_covar=tensor([0.0273, 0.0207, 0.0214, 0.0216, 0.0257, 0.0304, 0.0200, 0.0300], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 22:03:16,569 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.739e+02 3.654e+02 4.521e+02 5.323e+02 1.367e+03, threshold=9.043e+02, percent-clipped=1.0 2023-03-08 22:03:18,861 INFO [train.py:898] (3/4) Epoch 8, batch 200, loss[loss=0.2035, simple_loss=0.2854, pruned_loss=0.06079, over 18268.00 frames. ], tot_loss[loss=0.2083, simple_loss=0.2892, pruned_loss=0.0637, over 2288835.74 frames. 
], batch size: 47, lr: 1.46e-02, grad_scale: 8.0 2023-03-08 22:03:32,345 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25651.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 22:03:36,019 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.3938, 3.3031, 4.6936, 3.9711, 2.8513, 2.7051, 3.5930, 4.7177], device='cuda:3'), covar=tensor([0.0845, 0.1385, 0.0091, 0.0350, 0.1007, 0.1140, 0.0520, 0.0177], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0217, 0.0082, 0.0150, 0.0168, 0.0171, 0.0158, 0.0111], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-08 22:04:08,498 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4152, 5.9602, 5.4483, 5.5704, 5.3912, 5.4522, 5.9615, 5.9080], device='cuda:3'), covar=tensor([0.1085, 0.0572, 0.0451, 0.0696, 0.1373, 0.0619, 0.0539, 0.0619], device='cuda:3'), in_proj_covar=tensor([0.0468, 0.0376, 0.0287, 0.0411, 0.0566, 0.0411, 0.0511, 0.0397], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 22:04:17,980 INFO [train.py:898] (3/4) Epoch 8, batch 250, loss[loss=0.1954, simple_loss=0.2725, pruned_loss=0.05916, over 18558.00 frames. ], tot_loss[loss=0.2091, simple_loss=0.2894, pruned_loss=0.06438, over 2578795.11 frames. ], batch size: 49, lr: 1.46e-02, grad_scale: 8.0 2023-03-08 22:04:28,425 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1741, 5.1233, 4.7125, 5.1142, 5.1558, 4.5885, 5.0340, 4.8033], device='cuda:3'), covar=tensor([0.0422, 0.0460, 0.1428, 0.0641, 0.0489, 0.0389, 0.0373, 0.0741], device='cuda:3'), in_proj_covar=tensor([0.0345, 0.0400, 0.0540, 0.0312, 0.0287, 0.0371, 0.0385, 0.0493], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-08 22:04:29,459 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=25699.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:05:14,436 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.583e+02 3.732e+02 4.597e+02 5.455e+02 9.874e+02, threshold=9.193e+02, percent-clipped=1.0 2023-03-08 22:05:17,284 INFO [train.py:898] (3/4) Epoch 8, batch 300, loss[loss=0.1801, simple_loss=0.2593, pruned_loss=0.05046, over 18442.00 frames. ], tot_loss[loss=0.208, simple_loss=0.2885, pruned_loss=0.06369, over 2809875.67 frames. ], batch size: 43, lr: 1.46e-02, grad_scale: 8.0 2023-03-08 22:06:10,057 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25784.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 22:06:15,833 INFO [train.py:898] (3/4) Epoch 8, batch 350, loss[loss=0.2112, simple_loss=0.2939, pruned_loss=0.06424, over 18472.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.2892, pruned_loss=0.06352, over 2988823.70 frames. 
], batch size: 59, lr: 1.46e-02, grad_scale: 8.0 2023-03-08 22:06:39,075 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5513, 4.7359, 4.6066, 4.6081, 4.5733, 5.2486, 4.9249, 4.6283], device='cuda:3'), covar=tensor([0.0908, 0.0815, 0.0737, 0.0710, 0.1241, 0.0807, 0.0617, 0.1555], device='cuda:3'), in_proj_covar=tensor([0.0271, 0.0211, 0.0213, 0.0214, 0.0255, 0.0306, 0.0201, 0.0301], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 22:07:11,597 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.442e+02 3.462e+02 4.130e+02 5.163e+02 1.142e+03, threshold=8.260e+02, percent-clipped=1.0 2023-03-08 22:07:14,543 INFO [train.py:898] (3/4) Epoch 8, batch 400, loss[loss=0.1893, simple_loss=0.2743, pruned_loss=0.05217, over 18489.00 frames. ], tot_loss[loss=0.2079, simple_loss=0.2887, pruned_loss=0.06354, over 3110834.95 frames. ], batch size: 47, lr: 1.46e-02, grad_scale: 8.0 2023-03-08 22:07:22,001 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4883, 3.5115, 4.8671, 4.1901, 2.8576, 2.7812, 3.8364, 4.8485], device='cuda:3'), covar=tensor([0.0853, 0.1357, 0.0085, 0.0308, 0.1019, 0.1126, 0.0472, 0.0153], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0223, 0.0084, 0.0154, 0.0172, 0.0174, 0.0161, 0.0114], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0001, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-08 22:07:37,272 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=25858.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 22:07:41,685 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0439, 5.1491, 2.7561, 4.9234, 4.6412, 5.1669, 4.8502, 2.6304], device='cuda:3'), covar=tensor([0.0153, 0.0068, 0.0699, 0.0084, 0.0085, 0.0062, 0.0113, 0.0976], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0057, 0.0084, 0.0071, 0.0067, 0.0057, 0.0070, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3') 2023-03-08 22:08:13,028 INFO [train.py:898] (3/4) Epoch 8, batch 450, loss[loss=0.2088, simple_loss=0.3009, pruned_loss=0.05829, over 17165.00 frames. ], tot_loss[loss=0.2074, simple_loss=0.2882, pruned_loss=0.06325, over 3226313.49 frames. 
], batch size: 78, lr: 1.46e-02, grad_scale: 8.0 2023-03-08 22:08:28,241 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0333, 3.3370, 2.5859, 3.4297, 4.1082, 2.6343, 3.2336, 3.3115], device='cuda:3'), covar=tensor([0.0089, 0.0990, 0.1130, 0.0500, 0.0056, 0.0984, 0.0631, 0.0670], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0198, 0.0180, 0.0178, 0.0077, 0.0167, 0.0191, 0.0190], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-08 22:08:48,657 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=25919.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 22:08:56,438 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9017, 5.5179, 5.5634, 5.5183, 5.1944, 5.4703, 4.7936, 5.3426], device='cuda:3'), covar=tensor([0.0202, 0.0245, 0.0145, 0.0191, 0.0295, 0.0163, 0.1035, 0.0239], device='cuda:3'), in_proj_covar=tensor([0.0151, 0.0197, 0.0180, 0.0197, 0.0191, 0.0199, 0.0264, 0.0185], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-08 22:09:09,607 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.689e+02 3.778e+02 4.755e+02 5.799e+02 1.474e+03, threshold=9.510e+02, percent-clipped=4.0 2023-03-08 22:09:11,983 INFO [train.py:898] (3/4) Epoch 8, batch 500, loss[loss=0.1782, simple_loss=0.2568, pruned_loss=0.04987, over 18435.00 frames. ], tot_loss[loss=0.207, simple_loss=0.2879, pruned_loss=0.06308, over 3300921.93 frames. ], batch size: 43, lr: 1.45e-02, grad_scale: 8.0 2023-03-08 22:09:16,965 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0017, 4.1026, 2.2940, 4.1407, 5.0761, 2.4073, 3.2772, 3.6033], device='cuda:3'), covar=tensor([0.0070, 0.0819, 0.1417, 0.0474, 0.0040, 0.1222, 0.0736, 0.0739], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0196, 0.0178, 0.0178, 0.0077, 0.0166, 0.0190, 0.0188], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-08 22:09:31,559 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1381, 4.9989, 5.1100, 4.9877, 5.0087, 5.7303, 5.3422, 5.0559], device='cuda:3'), covar=tensor([0.0956, 0.0734, 0.0777, 0.0731, 0.1532, 0.0892, 0.0706, 0.1730], device='cuda:3'), in_proj_covar=tensor([0.0273, 0.0212, 0.0213, 0.0215, 0.0255, 0.0310, 0.0204, 0.0305], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 22:09:42,032 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3656, 4.4759, 4.5032, 4.2987, 4.2535, 4.2553, 4.6676, 4.6405], device='cuda:3'), covar=tensor([0.0070, 0.0081, 0.0076, 0.0090, 0.0080, 0.0125, 0.0078, 0.0095], device='cuda:3'), in_proj_covar=tensor([0.0071, 0.0052, 0.0054, 0.0068, 0.0056, 0.0077, 0.0065, 0.0064], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0002, 0.0003, 0.0002, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-08 22:10:10,518 INFO [train.py:898] (3/4) Epoch 8, batch 550, loss[loss=0.2283, simple_loss=0.3104, pruned_loss=0.07315, over 18027.00 frames. ], tot_loss[loss=0.2077, simple_loss=0.2889, pruned_loss=0.06327, over 3364751.00 frames. 
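[Note] The learning rate drifts slowly downward through this stretch (1.47e-02 at the top of the section, 1.45e-02 here around batch_count ~26000). That decay is what icefall's Eden schedule would produce; a hedged reconstruction, where base_lr, lr_batches and lr_epochs below are assumed values for this run rather than quantities visible in this excerpt:

```python
# Hedged sketch of icefall's Eden LR schedule, which would explain the slow
# decay of the "lr:" field. base_lr=0.05, lr_batches=5000 and lr_epochs=3.5
# are assumptions about this run's configuration.
def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# ~26000 optimizer steps done, 7 completed epochs (we are inside epoch 8):
print(eden_lr(0.05, 26000, 7))  # ~1.45e-02, matching the "lr:" entries here
```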
], batch size: 65, lr: 1.45e-02, grad_scale: 8.0 2023-03-08 22:10:32,334 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8837, 5.4266, 5.5141, 5.4368, 5.1156, 5.3839, 4.6696, 5.3602], device='cuda:3'), covar=tensor([0.0221, 0.0246, 0.0149, 0.0223, 0.0301, 0.0189, 0.1132, 0.0235], device='cuda:3'), in_proj_covar=tensor([0.0152, 0.0198, 0.0181, 0.0198, 0.0192, 0.0200, 0.0264, 0.0185], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-08 22:10:40,738 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8617, 2.3658, 2.3218, 2.3545, 2.9197, 2.7674, 2.5406, 2.5932], device='cuda:3'), covar=tensor([0.0256, 0.0293, 0.0445, 0.0359, 0.0185, 0.0132, 0.0369, 0.0260], device='cuda:3'), in_proj_covar=tensor([0.0107, 0.0088, 0.0137, 0.0117, 0.0086, 0.0071, 0.0113, 0.0111], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 22:11:10,803 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.250e+02 3.451e+02 4.216e+02 4.967e+02 1.068e+03, threshold=8.432e+02, percent-clipped=2.0 2023-03-08 22:11:13,148 INFO [train.py:898] (3/4) Epoch 8, batch 600, loss[loss=0.2414, simple_loss=0.3144, pruned_loss=0.08415, over 17218.00 frames. ], tot_loss[loss=0.2068, simple_loss=0.288, pruned_loss=0.06286, over 3419900.56 frames. ], batch size: 78, lr: 1.45e-02, grad_scale: 8.0 2023-03-08 22:11:20,949 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.3691, 3.6169, 4.9976, 4.2591, 3.0368, 2.6393, 4.2116, 5.1437], device='cuda:3'), covar=tensor([0.0852, 0.1345, 0.0065, 0.0277, 0.0887, 0.1138, 0.0351, 0.0105], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0221, 0.0084, 0.0151, 0.0170, 0.0171, 0.0158, 0.0114], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-08 22:12:06,689 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26084.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 22:12:12,075 INFO [train.py:898] (3/4) Epoch 8, batch 650, loss[loss=0.2242, simple_loss=0.3103, pruned_loss=0.06906, over 18498.00 frames. ], tot_loss[loss=0.2053, simple_loss=0.2863, pruned_loss=0.0621, over 3472329.63 frames. ], batch size: 53, lr: 1.45e-02, grad_scale: 8.0 2023-03-08 22:12:28,508 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26102.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:13:03,430 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=26132.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 22:13:08,680 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.333e+02 3.685e+02 4.568e+02 5.569e+02 1.081e+03, threshold=9.136e+02, percent-clipped=5.0 2023-03-08 22:13:11,015 INFO [train.py:898] (3/4) Epoch 8, batch 700, loss[loss=0.2027, simple_loss=0.282, pruned_loss=0.06168, over 18497.00 frames. ], tot_loss[loss=0.2058, simple_loss=0.2869, pruned_loss=0.06241, over 3497482.15 frames. ], batch size: 51, lr: 1.45e-02, grad_scale: 8.0 2023-03-08 22:13:20,199 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. 
limit=2.0 2023-03-08 22:13:39,823 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4140, 5.3428, 4.8839, 5.2772, 5.3213, 4.7971, 5.2789, 4.9742], device='cuda:3'), covar=tensor([0.0371, 0.0384, 0.1333, 0.0768, 0.0443, 0.0366, 0.0375, 0.0769], device='cuda:3'), in_proj_covar=tensor([0.0350, 0.0401, 0.0548, 0.0323, 0.0295, 0.0376, 0.0395, 0.0511], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-08 22:13:39,889 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26163.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:13:46,043 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26168.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:13:50,445 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5717, 5.4748, 5.0346, 5.4465, 5.4502, 4.9239, 5.4296, 5.1422], device='cuda:3'), covar=tensor([0.0310, 0.0391, 0.1216, 0.0710, 0.0454, 0.0366, 0.0320, 0.0696], device='cuda:3'), in_proj_covar=tensor([0.0349, 0.0401, 0.0546, 0.0322, 0.0295, 0.0375, 0.0394, 0.0509], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-08 22:14:09,240 INFO [train.py:898] (3/4) Epoch 8, batch 750, loss[loss=0.2166, simple_loss=0.3012, pruned_loss=0.06605, over 18392.00 frames. ], tot_loss[loss=0.2065, simple_loss=0.2875, pruned_loss=0.06273, over 3520816.55 frames. ], batch size: 52, lr: 1.45e-02, grad_scale: 8.0 2023-03-08 22:14:39,860 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26214.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 22:14:57,290 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26229.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:15:05,903 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.506e+02 3.613e+02 4.168e+02 4.953e+02 1.109e+03, threshold=8.337e+02, percent-clipped=3.0 2023-03-08 22:15:06,734 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-08 22:15:08,151 INFO [train.py:898] (3/4) Epoch 8, batch 800, loss[loss=0.1956, simple_loss=0.2714, pruned_loss=0.05991, over 18492.00 frames. ], tot_loss[loss=0.2072, simple_loss=0.2885, pruned_loss=0.06294, over 3528167.13 frames. ], batch size: 47, lr: 1.45e-02, grad_scale: 8.0 2023-03-08 22:15:21,618 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26250.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:15:22,706 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2398, 5.4274, 3.0152, 5.1717, 4.9927, 5.4361, 5.2940, 2.7566], device='cuda:3'), covar=tensor([0.0150, 0.0051, 0.0667, 0.0058, 0.0084, 0.0073, 0.0076, 0.0972], device='cuda:3'), in_proj_covar=tensor([0.0071, 0.0058, 0.0086, 0.0072, 0.0068, 0.0057, 0.0072, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0005, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3') 2023-03-08 22:16:07,214 INFO [train.py:898] (3/4) Epoch 8, batch 850, loss[loss=0.1949, simple_loss=0.2745, pruned_loss=0.05762, over 18357.00 frames. ], tot_loss[loss=0.207, simple_loss=0.2883, pruned_loss=0.06287, over 3537143.75 frames. 
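[Note] The [zipformer.py:625] records track stochastic whole-layer skipping inside each encoder stack: on a given batch some layers are bypassed entirely (num_to_drop=1, layers_to_drop={0}), and the fact that num_to_drop is still occasionally nonzero at batch_count ~26000, far past warmup_end=4000.0, suggests a small drop rate is retained throughout training as a regularizer. A minimal sketch; the drop probability and the selection rule are assumptions, and the real zipformer logic additionally modulates the rate with the warm-up counters logged above:

```python
import random

# Hedged sketch of per-batch whole-layer dropout, consistent with the
# "num_to_drop=..., layers_to_drop={...}" records. p_drop is an assumed
# steady-state rate; during warm-up the real schedule is more aggressive.
def choose_layers_to_drop(num_layers: int, p_drop: float = 0.075) -> set:
    return {i for i in range(num_layers) if random.random() < p_drop}

layers_to_drop = choose_layers_to_drop(num_layers=4)
print(f"num_to_drop={len(layers_to_drop)}, layers_to_drop={layers_to_drop}")
# In the forward pass, layers in this set would act as the identity.
```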
], batch size: 46, lr: 1.45e-02, grad_scale: 8.0 2023-03-08 22:16:33,954 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26311.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:16:51,145 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2037, 4.2802, 2.3278, 4.2959, 5.2468, 2.5778, 3.6424, 3.6217], device='cuda:3'), covar=tensor([0.0068, 0.0937, 0.1596, 0.0491, 0.0038, 0.1278, 0.0697, 0.0890], device='cuda:3'), in_proj_covar=tensor([0.0094, 0.0200, 0.0181, 0.0180, 0.0077, 0.0169, 0.0193, 0.0190], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-08 22:17:04,267 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.475e+02 3.702e+02 4.449e+02 5.668e+02 1.488e+03, threshold=8.898e+02, percent-clipped=3.0 2023-03-08 22:17:06,540 INFO [train.py:898] (3/4) Epoch 8, batch 900, loss[loss=0.2209, simple_loss=0.2908, pruned_loss=0.07554, over 18389.00 frames. ], tot_loss[loss=0.2075, simple_loss=0.2888, pruned_loss=0.06305, over 3533613.68 frames. ], batch size: 46, lr: 1.44e-02, grad_scale: 8.0 2023-03-08 22:18:06,764 INFO [train.py:898] (3/4) Epoch 8, batch 950, loss[loss=0.223, simple_loss=0.3041, pruned_loss=0.07102, over 16197.00 frames. ], tot_loss[loss=0.2065, simple_loss=0.2875, pruned_loss=0.06278, over 3545239.71 frames. ], batch size: 94, lr: 1.44e-02, grad_scale: 8.0 2023-03-08 22:18:17,758 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.97 vs. limit=5.0 2023-03-08 22:18:37,097 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26414.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:18:39,490 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5682, 3.5639, 1.7500, 4.4787, 3.0645, 4.5934, 2.3368, 4.0550], device='cuda:3'), covar=tensor([0.0505, 0.0732, 0.1604, 0.0372, 0.0858, 0.0233, 0.1109, 0.0352], device='cuda:3'), in_proj_covar=tensor([0.0172, 0.0198, 0.0171, 0.0202, 0.0171, 0.0194, 0.0178, 0.0172], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002], device='cuda:3') 2023-03-08 22:18:58,698 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-08 22:19:04,446 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.644e+02 3.699e+02 4.361e+02 5.008e+02 1.213e+03, threshold=8.721e+02, percent-clipped=3.0 2023-03-08 22:19:06,709 INFO [train.py:898] (3/4) Epoch 8, batch 1000, loss[loss=0.1832, simple_loss=0.2631, pruned_loss=0.05164, over 18253.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.2864, pruned_loss=0.06233, over 3566775.23 frames. 
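[Note] The frame count attached to tot_loss saturates near ~3.6M (3533613 at batch 900, 3545239 at batch 950, 3566775 at batch 1000) instead of growing linearly with the epoch, which is what you get if per-batch statistics are decayed geometrically before each new batch is folded in, i.e. an exponentially-weighted running average rather than a plain epoch total. A hedged sketch; the decay constant is an assumption:

```python
# Hedged sketch of a frame-weighted running average with geometric decay,
# which would explain why "over N frames" saturates rather than growing
# without bound. The per-batch decay constant below is an assumption.
decay = 1.0 - 1.0 / 200.0

tot_weighted_loss, tot_frames = 0.0, 0.0
for batch_loss, batch_frames in [(0.21, 14000.0)] * 1000:  # dummy batch stream
    tot_weighted_loss = tot_weighted_loss * decay + batch_loss * batch_frames
    tot_frames = tot_frames * decay + batch_frames
    tot_loss = tot_weighted_loss / tot_frames  # the reported tot_loss value

print(f"tot_loss={tot_loss:.4f}, over {tot_frames:.2f} frames")
# tot_frames converges to batch_frames / (1 - decay) = 14000 * 200 = 2.8e6,
# the same saturation behaviour as the logged frame counts.
```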
], batch size: 47, lr: 1.44e-02, grad_scale: 8.0 2023-03-08 22:19:29,033 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26458.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:19:38,018 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7077, 4.6982, 4.7905, 4.5448, 4.5373, 4.5371, 4.9681, 4.9033], device='cuda:3'), covar=tensor([0.0066, 0.0070, 0.0054, 0.0094, 0.0076, 0.0111, 0.0066, 0.0102], device='cuda:3'), in_proj_covar=tensor([0.0071, 0.0051, 0.0052, 0.0066, 0.0055, 0.0077, 0.0063, 0.0063], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0002, 0.0003, 0.0002, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-08 22:19:50,706 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26475.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:20:06,546 INFO [train.py:898] (3/4) Epoch 8, batch 1050, loss[loss=0.2029, simple_loss=0.2822, pruned_loss=0.06178, over 18395.00 frames. ], tot_loss[loss=0.2054, simple_loss=0.286, pruned_loss=0.06243, over 3576857.19 frames. ], batch size: 48, lr: 1.44e-02, grad_scale: 8.0 2023-03-08 22:20:35,307 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26514.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 22:20:47,862 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26524.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:21:03,506 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.558e+02 3.535e+02 4.269e+02 5.489e+02 1.390e+03, threshold=8.539e+02, percent-clipped=8.0 2023-03-08 22:21:05,721 INFO [train.py:898] (3/4) Epoch 8, batch 1100, loss[loss=0.2532, simple_loss=0.3329, pruned_loss=0.08674, over 18148.00 frames. ], tot_loss[loss=0.2054, simple_loss=0.2862, pruned_loss=0.06235, over 3575796.25 frames. ], batch size: 62, lr: 1.44e-02, grad_scale: 8.0 2023-03-08 22:21:32,867 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=26562.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 22:21:43,586 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7590, 5.3142, 5.2894, 5.1243, 4.8979, 5.2420, 4.5855, 5.1427], device='cuda:3'), covar=tensor([0.0215, 0.0238, 0.0188, 0.0311, 0.0312, 0.0196, 0.1041, 0.0267], device='cuda:3'), in_proj_covar=tensor([0.0149, 0.0196, 0.0181, 0.0199, 0.0190, 0.0197, 0.0259, 0.0184], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-08 22:22:05,753 INFO [train.py:898] (3/4) Epoch 8, batch 1150, loss[loss=0.2009, simple_loss=0.2811, pruned_loss=0.06041, over 18380.00 frames. ], tot_loss[loss=0.2056, simple_loss=0.2862, pruned_loss=0.06247, over 3580357.08 frames. ], batch size: 52, lr: 1.44e-02, grad_scale: 8.0 2023-03-08 22:22:24,920 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26606.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:22:38,834 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4055, 5.0403, 5.5547, 5.4631, 5.2138, 6.1062, 5.7307, 5.2963], device='cuda:3'), covar=tensor([0.0885, 0.0552, 0.0666, 0.0533, 0.1311, 0.0647, 0.0472, 0.1463], device='cuda:3'), in_proj_covar=tensor([0.0277, 0.0211, 0.0216, 0.0217, 0.0258, 0.0311, 0.0205, 0.0299], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 22:22:47,791 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. 
limit=2.0 2023-03-08 22:23:02,044 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.150e+02 3.587e+02 4.399e+02 5.427e+02 1.423e+03, threshold=8.799e+02, percent-clipped=5.0 2023-03-08 22:23:04,976 INFO [train.py:898] (3/4) Epoch 8, batch 1200, loss[loss=0.2127, simple_loss=0.2941, pruned_loss=0.06561, over 18289.00 frames. ], tot_loss[loss=0.2066, simple_loss=0.2873, pruned_loss=0.06288, over 3578065.12 frames. ], batch size: 49, lr: 1.44e-02, grad_scale: 8.0 2023-03-08 22:23:33,274 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0 2023-03-08 22:24:03,498 INFO [train.py:898] (3/4) Epoch 8, batch 1250, loss[loss=0.1893, simple_loss=0.2757, pruned_loss=0.05148, over 18252.00 frames. ], tot_loss[loss=0.2065, simple_loss=0.2872, pruned_loss=0.06292, over 3579652.56 frames. ], batch size: 47, lr: 1.43e-02, grad_scale: 8.0 2023-03-08 22:24:59,402 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.195e+02 3.433e+02 4.104e+02 5.033e+02 1.173e+03, threshold=8.208e+02, percent-clipped=2.0 2023-03-08 22:25:02,168 INFO [train.py:898] (3/4) Epoch 8, batch 1300, loss[loss=0.2173, simple_loss=0.2852, pruned_loss=0.07471, over 18271.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.2879, pruned_loss=0.06314, over 3582537.13 frames. ], batch size: 45, lr: 1.43e-02, grad_scale: 8.0 2023-03-08 22:25:24,938 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26758.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:25:38,725 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26770.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:26:00,818 INFO [train.py:898] (3/4) Epoch 8, batch 1350, loss[loss=0.2279, simple_loss=0.309, pruned_loss=0.0734, over 17186.00 frames. ], tot_loss[loss=0.2066, simple_loss=0.2875, pruned_loss=0.06286, over 3591335.94 frames. ], batch size: 78, lr: 1.43e-02, grad_scale: 8.0 2023-03-08 22:26:21,902 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=26806.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:26:26,688 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6267, 3.5173, 3.2534, 2.9698, 3.4185, 2.6144, 2.6071, 3.6834], device='cuda:3'), covar=tensor([0.0033, 0.0054, 0.0087, 0.0116, 0.0065, 0.0151, 0.0172, 0.0047], device='cuda:3'), in_proj_covar=tensor([0.0069, 0.0095, 0.0087, 0.0133, 0.0085, 0.0130, 0.0138, 0.0074], device='cuda:3'), out_proj_covar=tensor([9.4317e-05, 1.4178e-04, 1.2802e-04, 2.0583e-04, 1.2478e-04, 1.9851e-04, 2.1036e-04, 1.0686e-04], device='cuda:3') 2023-03-08 22:26:38,202 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-08 22:26:42,366 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26824.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:26:57,540 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.883e+02 3.529e+02 4.433e+02 5.428e+02 1.307e+03, threshold=8.866e+02, percent-clipped=5.0 2023-03-08 22:27:00,014 INFO [train.py:898] (3/4) Epoch 8, batch 1400, loss[loss=0.1996, simple_loss=0.283, pruned_loss=0.05805, over 18550.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.2871, pruned_loss=0.06257, over 3597030.28 frames. ], batch size: 49, lr: 1.43e-02, grad_scale: 8.0 2023-03-08 22:27:06,013 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. 
limit=2.0 2023-03-08 22:27:39,752 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=26872.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:27:56,678 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6534, 3.5037, 3.4447, 2.9291, 3.3481, 2.8049, 2.6072, 3.5865], device='cuda:3'), covar=tensor([0.0031, 0.0049, 0.0065, 0.0105, 0.0072, 0.0124, 0.0148, 0.0069], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0092, 0.0084, 0.0128, 0.0083, 0.0126, 0.0135, 0.0073], device='cuda:3'), out_proj_covar=tensor([9.1635e-05, 1.3606e-04, 1.2215e-04, 1.9842e-04, 1.2149e-04, 1.9237e-04, 2.0494e-04, 1.0500e-04], device='cuda:3') 2023-03-08 22:27:59,674 INFO [train.py:898] (3/4) Epoch 8, batch 1450, loss[loss=0.2151, simple_loss=0.2964, pruned_loss=0.06688, over 18364.00 frames. ], tot_loss[loss=0.2067, simple_loss=0.2876, pruned_loss=0.0629, over 3592753.15 frames. ], batch size: 50, lr: 1.43e-02, grad_scale: 8.0 2023-03-08 22:28:18,367 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26904.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 22:28:21,058 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26906.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:28:38,803 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-08 22:28:56,609 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.356e+02 3.688e+02 4.427e+02 5.301e+02 1.390e+03, threshold=8.853e+02, percent-clipped=1.0 2023-03-08 22:28:58,820 INFO [train.py:898] (3/4) Epoch 8, batch 1500, loss[loss=0.2257, simple_loss=0.3052, pruned_loss=0.07314, over 13024.00 frames. ], tot_loss[loss=0.2076, simple_loss=0.2885, pruned_loss=0.06331, over 3572788.66 frames. ], batch size: 130, lr: 1.43e-02, grad_scale: 8.0 2023-03-08 22:29:12,624 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26950.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:29:17,101 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=26954.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:29:30,468 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26965.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 22:29:41,772 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3567, 5.3480, 4.9049, 5.3671, 5.3488, 4.6763, 5.2375, 4.9311], device='cuda:3'), covar=tensor([0.0379, 0.0408, 0.1293, 0.0594, 0.0434, 0.0412, 0.0342, 0.1053], device='cuda:3'), in_proj_covar=tensor([0.0357, 0.0417, 0.0565, 0.0329, 0.0304, 0.0380, 0.0403, 0.0523], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-08 22:29:58,008 INFO [train.py:898] (3/4) Epoch 8, batch 1550, loss[loss=0.2173, simple_loss=0.3009, pruned_loss=0.06685, over 18214.00 frames. ], tot_loss[loss=0.2086, simple_loss=0.2894, pruned_loss=0.06386, over 3567444.49 frames. 
], batch size: 60, lr: 1.43e-02, grad_scale: 8.0 2023-03-08 22:29:58,438 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8946, 4.8645, 5.0082, 4.7042, 4.7154, 4.7633, 5.2219, 5.1043], device='cuda:3'), covar=tensor([0.0077, 0.0075, 0.0060, 0.0090, 0.0069, 0.0096, 0.0077, 0.0103], device='cuda:3'), in_proj_covar=tensor([0.0071, 0.0051, 0.0053, 0.0067, 0.0056, 0.0077, 0.0065, 0.0064], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0002, 0.0003, 0.0002, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-08 22:30:24,926 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=27011.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:30:54,387 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.371e+02 3.938e+02 4.762e+02 5.471e+02 1.136e+03, threshold=9.525e+02, percent-clipped=4.0 2023-03-08 22:30:56,658 INFO [train.py:898] (3/4) Epoch 8, batch 1600, loss[loss=0.2121, simple_loss=0.2904, pruned_loss=0.0669, over 16328.00 frames. ], tot_loss[loss=0.2089, simple_loss=0.2897, pruned_loss=0.06406, over 3563803.82 frames. ], batch size: 94, lr: 1.43e-02, grad_scale: 8.0 2023-03-08 22:31:34,925 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27070.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:31:56,209 INFO [train.py:898] (3/4) Epoch 8, batch 1650, loss[loss=0.2533, simple_loss=0.3181, pruned_loss=0.0943, over 12770.00 frames. ], tot_loss[loss=0.2077, simple_loss=0.2888, pruned_loss=0.06329, over 3568714.21 frames. ], batch size: 129, lr: 1.42e-02, grad_scale: 8.0 2023-03-08 22:31:58,379 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0957, 5.2568, 2.7764, 5.0311, 4.8858, 5.2863, 5.0546, 2.5266], device='cuda:3'), covar=tensor([0.0151, 0.0059, 0.0688, 0.0061, 0.0067, 0.0069, 0.0112, 0.1017], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0060, 0.0086, 0.0072, 0.0068, 0.0057, 0.0072, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0005, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3') 2023-03-08 22:32:31,985 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=27118.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:32:53,444 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.332e+02 3.429e+02 4.052e+02 5.174e+02 1.566e+03, threshold=8.105e+02, percent-clipped=2.0 2023-03-08 22:32:55,939 INFO [train.py:898] (3/4) Epoch 8, batch 1700, loss[loss=0.2207, simple_loss=0.3035, pruned_loss=0.06892, over 18202.00 frames. ], tot_loss[loss=0.2082, simple_loss=0.2891, pruned_loss=0.06361, over 3580673.53 frames. ], batch size: 60, lr: 1.42e-02, grad_scale: 8.0 2023-03-08 22:33:07,225 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=27148.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:33:46,667 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7263, 3.6278, 5.1752, 4.4139, 3.5097, 3.1652, 4.5109, 5.2170], device='cuda:3'), covar=tensor([0.0730, 0.1550, 0.0049, 0.0257, 0.0692, 0.0949, 0.0265, 0.0119], device='cuda:3'), in_proj_covar=tensor([0.0134, 0.0223, 0.0085, 0.0153, 0.0170, 0.0173, 0.0161, 0.0117], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0001, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-08 22:33:55,119 INFO [train.py:898] (3/4) Epoch 8, batch 1750, loss[loss=0.2037, simple_loss=0.2796, pruned_loss=0.06395, over 18362.00 frames. 
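[Note] The [zipformer.py:1455] dumps report per-module attention-weight entropies (alongside covariance diagnostics of the attention projections). For calibration: the entropy of one attention row p over T keys is H(p) = -sum(p * log p), near log T for diffuse attention and near 0 for peaked attention, so the values around 4.7-5.2 above correspond to spreading attention over roughly e^4.9 ~= 134 frames. A hedged sketch of how such a diagnostic could be computed; the real hook may aggregate over heads and layers differently:

```python
import torch

# Hedged sketch of an attention-entropy diagnostic like the one dumped above;
# the aggregation (per head, per layer) is an assumption.
def attn_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    """attn_weights: (num_heads, num_queries, num_keys), rows summing to 1.
    Returns the mean entropy per head."""
    eps = 1.0e-20
    h = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return h.mean(dim=-1)

w = torch.softmax(torch.randn(8, 50, 200), dim=-1)
print(attn_entropy(w))  # close to, but below, log(200) ~= 5.3: diffuse attention
```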
], tot_loss[loss=0.2087, simple_loss=0.2896, pruned_loss=0.06386, over 3583882.71 frames. ], batch size: 46, lr: 1.42e-02, grad_scale: 8.0 2023-03-08 22:34:19,887 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=27209.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:34:45,124 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4789, 5.0225, 5.1158, 5.1563, 4.5800, 4.9468, 3.8324, 4.8789], device='cuda:3'), covar=tensor([0.0313, 0.0511, 0.0321, 0.0321, 0.0425, 0.0342, 0.2248, 0.0426], device='cuda:3'), in_proj_covar=tensor([0.0156, 0.0201, 0.0183, 0.0205, 0.0196, 0.0205, 0.0268, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-08 22:34:52,835 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.362e+02 3.433e+02 4.337e+02 5.594e+02 1.098e+03, threshold=8.674e+02, percent-clipped=7.0 2023-03-08 22:34:55,192 INFO [train.py:898] (3/4) Epoch 8, batch 1800, loss[loss=0.2059, simple_loss=0.2884, pruned_loss=0.06172, over 18360.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.2896, pruned_loss=0.06331, over 3586450.91 frames. ], batch size: 55, lr: 1.42e-02, grad_scale: 8.0 2023-03-08 22:35:20,478 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=27260.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 22:35:38,291 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5815, 2.8667, 4.3004, 3.8897, 2.5399, 4.6335, 3.9891, 2.6438], device='cuda:3'), covar=tensor([0.0358, 0.1217, 0.0178, 0.0237, 0.1490, 0.0126, 0.0335, 0.0980], device='cuda:3'), in_proj_covar=tensor([0.0173, 0.0206, 0.0127, 0.0128, 0.0202, 0.0169, 0.0183, 0.0182], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 22:35:54,547 INFO [train.py:898] (3/4) Epoch 8, batch 1850, loss[loss=0.224, simple_loss=0.3133, pruned_loss=0.06734, over 18285.00 frames. ], tot_loss[loss=0.2078, simple_loss=0.2895, pruned_loss=0.0631, over 3597776.26 frames. 
], batch size: 57, lr: 1.42e-02, grad_scale: 8.0 2023-03-08 22:36:14,401 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=27306.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:36:22,869 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5969, 3.3687, 1.9251, 4.3511, 3.0116, 4.5442, 2.2890, 4.1839], device='cuda:3'), covar=tensor([0.0460, 0.0734, 0.1342, 0.0439, 0.0803, 0.0156, 0.1062, 0.0246], device='cuda:3'), in_proj_covar=tensor([0.0175, 0.0203, 0.0172, 0.0211, 0.0174, 0.0201, 0.0181, 0.0174], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 22:36:46,124 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9045, 3.5553, 5.2738, 3.1435, 4.3258, 2.4392, 2.9894, 1.9291], device='cuda:3'), covar=tensor([0.0764, 0.0802, 0.0047, 0.0496, 0.0464, 0.2205, 0.2087, 0.1639], device='cuda:3'), in_proj_covar=tensor([0.0184, 0.0199, 0.0099, 0.0153, 0.0209, 0.0238, 0.0254, 0.0196], device='cuda:3'), out_proj_covar=tensor([1.6542e-04, 1.8567e-04, 9.2899e-05, 1.4238e-04, 1.9555e-04, 2.2279e-04, 2.3563e-04, 1.8474e-04], device='cuda:3') 2023-03-08 22:36:50,627 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.586e+02 3.923e+02 4.847e+02 5.995e+02 1.610e+03, threshold=9.695e+02, percent-clipped=7.0 2023-03-08 22:36:53,069 INFO [train.py:898] (3/4) Epoch 8, batch 1900, loss[loss=0.2102, simple_loss=0.2982, pruned_loss=0.06106, over 18621.00 frames. ], tot_loss[loss=0.2074, simple_loss=0.289, pruned_loss=0.06287, over 3602872.17 frames. ], batch size: 52, lr: 1.42e-02, grad_scale: 16.0 2023-03-08 22:37:41,105 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8311, 4.8015, 5.0063, 4.6909, 4.6924, 4.7075, 5.1502, 5.0411], device='cuda:3'), covar=tensor([0.0079, 0.0089, 0.0077, 0.0099, 0.0086, 0.0141, 0.0091, 0.0111], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0052, 0.0054, 0.0069, 0.0057, 0.0078, 0.0066, 0.0065], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0002, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-08 22:37:51,478 INFO [train.py:898] (3/4) Epoch 8, batch 1950, loss[loss=0.2079, simple_loss=0.2914, pruned_loss=0.06224, over 18306.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.2897, pruned_loss=0.06326, over 3601916.01 frames. 
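[Note] grad_scale doubles from 8.0 to 16.0 at batch 1900 above, is back at 8.0 by batch 2050, and reaches 4.0 by batch 2600 further down: the signature of dynamic loss scaling under fp16, where the scaler grows the scale after a run of overflow-free steps and halves it whenever infs/NaNs show up in the gradients. A minimal mixed-precision step using PyTorch's stock GradScaler; the model and optimizer are stand-ins, and the growth/backoff behaviour described is the library default, not a value read from this log:

```python
import torch
from torch.cuda.amp import GradScaler, autocast

# Minimal sketch of one fp16 training step with dynamic loss scaling
# (requires a CUDA device, as in this run). Model/optimizer are stand-ins.
model = torch.nn.Linear(80, 500).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1.42e-2)
scaler = GradScaler(init_scale=8.0)  # growth/backoff factors left at defaults

x = torch.randn(4, 80, device="cuda")
with autocast():
    loss = model(x).square().mean()
scaler.scale(loss).backward()  # backward on the scaled loss
scaler.step(optimizer)         # unscales grads; skips the step on inf/nan
scaler.update()                # grows the scale, or backs off after overflow
print(scaler.get_scale())      # the quantity logged as grad_scale
```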
], batch size: 54, lr: 1.42e-02, grad_scale: 16.0 2023-03-08 22:38:15,022 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6837, 3.6636, 3.5362, 2.9354, 3.4784, 2.7447, 2.7151, 3.6923], device='cuda:3'), covar=tensor([0.0032, 0.0053, 0.0062, 0.0111, 0.0064, 0.0138, 0.0154, 0.0045], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0098, 0.0088, 0.0132, 0.0086, 0.0132, 0.0139, 0.0075], device='cuda:3'), out_proj_covar=tensor([9.8328e-05, 1.4439e-04, 1.2883e-04, 2.0337e-04, 1.2600e-04, 2.0156e-04, 2.1107e-04, 1.0734e-04], device='cuda:3') 2023-03-08 22:38:42,587 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8090, 3.7632, 3.6472, 3.2092, 3.5577, 2.8820, 3.0845, 3.7735], device='cuda:3'), covar=tensor([0.0030, 0.0063, 0.0057, 0.0103, 0.0080, 0.0141, 0.0133, 0.0045], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0097, 0.0088, 0.0131, 0.0086, 0.0132, 0.0138, 0.0074], device='cuda:3'), out_proj_covar=tensor([9.8802e-05, 1.4289e-04, 1.2856e-04, 2.0263e-04, 1.2519e-04, 2.0143e-04, 2.0995e-04, 1.0701e-04], device='cuda:3') 2023-03-08 22:38:47,714 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.257e+02 3.355e+02 4.079e+02 5.085e+02 1.650e+03, threshold=8.157e+02, percent-clipped=2.0 2023-03-08 22:38:49,955 INFO [train.py:898] (3/4) Epoch 8, batch 2000, loss[loss=0.1938, simple_loss=0.2865, pruned_loss=0.05055, over 18578.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.2895, pruned_loss=0.06336, over 3599902.02 frames. ], batch size: 54, lr: 1.42e-02, grad_scale: 16.0 2023-03-08 22:39:12,090 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.99 vs. limit=5.0 2023-03-08 22:39:24,041 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. limit=2.0 2023-03-08 22:39:43,826 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8959, 2.4958, 2.1823, 2.3883, 2.9792, 2.7540, 2.6500, 2.5593], device='cuda:3'), covar=tensor([0.0228, 0.0235, 0.0593, 0.0380, 0.0218, 0.0245, 0.0396, 0.0317], device='cuda:3'), in_proj_covar=tensor([0.0107, 0.0091, 0.0141, 0.0123, 0.0092, 0.0073, 0.0119, 0.0117], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 22:39:48,956 INFO [train.py:898] (3/4) Epoch 8, batch 2050, loss[loss=0.2109, simple_loss=0.303, pruned_loss=0.05944, over 17163.00 frames. ], tot_loss[loss=0.2066, simple_loss=0.2881, pruned_loss=0.06261, over 3598046.92 frames. ], batch size: 78, lr: 1.41e-02, grad_scale: 8.0 2023-03-08 22:40:07,044 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=27504.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:40:32,870 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3340, 2.7626, 4.0238, 3.9453, 2.3486, 4.5510, 3.9861, 2.5016], device='cuda:3'), covar=tensor([0.0497, 0.1335, 0.0231, 0.0225, 0.1651, 0.0141, 0.0378, 0.1132], device='cuda:3'), in_proj_covar=tensor([0.0173, 0.0208, 0.0128, 0.0125, 0.0203, 0.0167, 0.0181, 0.0183], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 22:40:46,825 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.547e+02 3.620e+02 4.320e+02 6.156e+02 2.106e+03, threshold=8.640e+02, percent-clipped=12.0 2023-03-08 22:40:47,996 INFO [train.py:898] (3/4) Epoch 8, batch 2100, loss[loss=0.1713, simple_loss=0.2471, pruned_loss=0.0477, over 18172.00 frames. 
], tot_loss[loss=0.2068, simple_loss=0.2884, pruned_loss=0.06262, over 3599583.30 frames. ], batch size: 44, lr: 1.41e-02, grad_scale: 8.0 2023-03-08 22:41:12,648 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27560.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 22:41:47,086 INFO [train.py:898] (3/4) Epoch 8, batch 2150, loss[loss=0.2168, simple_loss=0.3031, pruned_loss=0.06522, over 18234.00 frames. ], tot_loss[loss=0.207, simple_loss=0.2884, pruned_loss=0.06283, over 3593624.82 frames. ], batch size: 60, lr: 1.41e-02, grad_scale: 8.0 2023-03-08 22:42:07,680 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27606.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:42:09,887 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=27608.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 22:42:45,312 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.528e+02 3.451e+02 4.193e+02 5.044e+02 8.620e+02, threshold=8.386e+02, percent-clipped=0.0 2023-03-08 22:42:46,500 INFO [train.py:898] (3/4) Epoch 8, batch 2200, loss[loss=0.1817, simple_loss=0.2626, pruned_loss=0.05047, over 18498.00 frames. ], tot_loss[loss=0.2062, simple_loss=0.2876, pruned_loss=0.06243, over 3600290.03 frames. ], batch size: 47, lr: 1.41e-02, grad_scale: 8.0 2023-03-08 22:43:03,783 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=27654.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:43:05,670 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5990, 5.1756, 5.0929, 5.0807, 4.7328, 5.0590, 4.3426, 5.0202], device='cuda:3'), covar=tensor([0.0210, 0.0244, 0.0219, 0.0350, 0.0366, 0.0242, 0.1252, 0.0250], device='cuda:3'), in_proj_covar=tensor([0.0155, 0.0198, 0.0188, 0.0208, 0.0197, 0.0208, 0.0272, 0.0189], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-08 22:43:44,309 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8043, 2.9082, 2.6651, 2.8123, 3.6309, 3.6662, 3.2651, 3.1030], device='cuda:3'), covar=tensor([0.0160, 0.0314, 0.0578, 0.0384, 0.0204, 0.0126, 0.0355, 0.0298], device='cuda:3'), in_proj_covar=tensor([0.0109, 0.0093, 0.0143, 0.0124, 0.0092, 0.0072, 0.0121, 0.0118], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 22:43:46,124 INFO [train.py:898] (3/4) Epoch 8, batch 2250, loss[loss=0.188, simple_loss=0.2677, pruned_loss=0.05422, over 18392.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.2876, pruned_loss=0.0623, over 3591492.71 frames. 
], batch size: 50, lr: 1.41e-02, grad_scale: 8.0 2023-03-08 22:43:57,743 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6641, 5.1759, 5.1846, 5.1028, 4.8184, 5.0695, 4.4548, 5.0113], device='cuda:3'), covar=tensor([0.0208, 0.0243, 0.0183, 0.0281, 0.0326, 0.0231, 0.1075, 0.0287], device='cuda:3'), in_proj_covar=tensor([0.0153, 0.0196, 0.0185, 0.0205, 0.0195, 0.0206, 0.0267, 0.0187], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-08 22:44:15,600 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=27714.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 22:44:44,345 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.230e+02 4.253e+02 4.744e+02 6.262e+02 1.251e+03, threshold=9.489e+02, percent-clipped=5.0 2023-03-08 22:44:45,486 INFO [train.py:898] (3/4) Epoch 8, batch 2300, loss[loss=0.1926, simple_loss=0.2764, pruned_loss=0.05441, over 18476.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.2873, pruned_loss=0.06188, over 3597496.16 frames. ], batch size: 51, lr: 1.41e-02, grad_scale: 8.0 2023-03-08 22:45:01,306 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.69 vs. limit=2.0 2023-03-08 22:45:28,209 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=27775.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 22:45:44,256 INFO [train.py:898] (3/4) Epoch 8, batch 2350, loss[loss=0.176, simple_loss=0.2644, pruned_loss=0.04376, over 18268.00 frames. ], tot_loss[loss=0.2051, simple_loss=0.2866, pruned_loss=0.06179, over 3598094.14 frames. ], batch size: 49, lr: 1.41e-02, grad_scale: 8.0 2023-03-08 22:46:03,040 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27804.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:46:42,183 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.997e+02 3.557e+02 4.056e+02 4.769e+02 1.044e+03, threshold=8.112e+02, percent-clipped=2.0 2023-03-08 22:46:43,703 INFO [train.py:898] (3/4) Epoch 8, batch 2400, loss[loss=0.2512, simple_loss=0.333, pruned_loss=0.08467, over 18455.00 frames. ], tot_loss[loss=0.2053, simple_loss=0.2867, pruned_loss=0.06188, over 3593695.99 frames. ], batch size: 59, lr: 1.41e-02, grad_scale: 8.0 2023-03-08 22:46:59,370 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=27852.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:47:42,907 INFO [train.py:898] (3/4) Epoch 8, batch 2450, loss[loss=0.2094, simple_loss=0.2826, pruned_loss=0.06808, over 18495.00 frames. ], tot_loss[loss=0.2051, simple_loss=0.2866, pruned_loss=0.06182, over 3594725.62 frames. 
], batch size: 47, lr: 1.40e-02, grad_scale: 8.0 2023-03-08 22:47:44,492 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5297, 2.7896, 2.4249, 2.6207, 3.5916, 3.5037, 3.1103, 2.9132], device='cuda:3'), covar=tensor([0.0188, 0.0266, 0.0646, 0.0435, 0.0166, 0.0128, 0.0264, 0.0305], device='cuda:3'), in_proj_covar=tensor([0.0107, 0.0093, 0.0142, 0.0122, 0.0091, 0.0072, 0.0118, 0.0117], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 22:48:00,339 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1197, 5.1535, 2.6436, 5.0330, 4.8694, 5.2874, 5.0642, 2.4414], device='cuda:3'), covar=tensor([0.0162, 0.0076, 0.0755, 0.0077, 0.0077, 0.0072, 0.0108, 0.1080], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0060, 0.0085, 0.0074, 0.0069, 0.0057, 0.0071, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0005, 0.0004, 0.0003, 0.0003, 0.0004, 0.0005], device='cuda:3') 2023-03-08 22:48:41,266 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.646e+02 3.668e+02 4.454e+02 5.440e+02 2.886e+03, threshold=8.907e+02, percent-clipped=9.0 2023-03-08 22:48:42,403 INFO [train.py:898] (3/4) Epoch 8, batch 2500, loss[loss=0.212, simple_loss=0.3011, pruned_loss=0.06145, over 18371.00 frames. ], tot_loss[loss=0.2048, simple_loss=0.2861, pruned_loss=0.06173, over 3597090.25 frames. ], batch size: 50, lr: 1.40e-02, grad_scale: 8.0 2023-03-08 22:49:04,793 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-08 22:49:41,157 INFO [train.py:898] (3/4) Epoch 8, batch 2550, loss[loss=0.2035, simple_loss=0.2908, pruned_loss=0.05813, over 18366.00 frames. ], tot_loss[loss=0.2057, simple_loss=0.2871, pruned_loss=0.06214, over 3586719.16 frames. ], batch size: 50, lr: 1.40e-02, grad_scale: 8.0 2023-03-08 22:49:44,891 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7497, 5.2976, 5.3542, 5.2142, 4.8792, 5.2498, 4.4991, 5.1190], device='cuda:3'), covar=tensor([0.0241, 0.0292, 0.0182, 0.0296, 0.0401, 0.0223, 0.1277, 0.0312], device='cuda:3'), in_proj_covar=tensor([0.0156, 0.0200, 0.0186, 0.0208, 0.0198, 0.0209, 0.0271, 0.0190], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-08 22:50:14,776 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.57 vs. limit=2.0 2023-03-08 22:50:45,004 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.293e+02 3.705e+02 4.339e+02 5.125e+02 8.562e+02, threshold=8.678e+02, percent-clipped=0.0 2023-03-08 22:50:45,029 INFO [train.py:898] (3/4) Epoch 8, batch 2600, loss[loss=0.2254, simple_loss=0.306, pruned_loss=0.07236, over 18498.00 frames. ], tot_loss[loss=0.2058, simple_loss=0.2873, pruned_loss=0.06213, over 3579707.77 frames. 
], batch size: 53, lr: 1.40e-02, grad_scale: 4.0 2023-03-08 22:51:22,069 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28070.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 22:51:36,268 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5498, 6.1977, 5.4741, 5.8837, 5.7460, 5.6613, 6.2290, 6.1406], device='cuda:3'), covar=tensor([0.1068, 0.0547, 0.0474, 0.0688, 0.1247, 0.0606, 0.0472, 0.0558], device='cuda:3'), in_proj_covar=tensor([0.0483, 0.0391, 0.0308, 0.0432, 0.0595, 0.0435, 0.0535, 0.0412], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 22:51:43,977 INFO [train.py:898] (3/4) Epoch 8, batch 2650, loss[loss=0.1823, simple_loss=0.2633, pruned_loss=0.05066, over 18413.00 frames. ], tot_loss[loss=0.2041, simple_loss=0.2856, pruned_loss=0.06124, over 3586445.79 frames. ], batch size: 48, lr: 1.40e-02, grad_scale: 4.0 2023-03-08 22:51:55,857 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-08 22:52:06,438 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.3804, 3.4732, 4.9221, 4.0985, 3.1381, 3.0053, 4.1866, 4.9605], device='cuda:3'), covar=tensor([0.0788, 0.1271, 0.0060, 0.0309, 0.0786, 0.0965, 0.0308, 0.0129], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0220, 0.0086, 0.0149, 0.0169, 0.0170, 0.0159, 0.0118], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-08 22:52:42,845 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.340e+02 3.798e+02 4.426e+02 5.240e+02 9.211e+02, threshold=8.852e+02, percent-clipped=2.0 2023-03-08 22:52:42,869 INFO [train.py:898] (3/4) Epoch 8, batch 2700, loss[loss=0.2039, simple_loss=0.2891, pruned_loss=0.05939, over 17788.00 frames. ], tot_loss[loss=0.2032, simple_loss=0.2846, pruned_loss=0.06094, over 3596702.14 frames. ], batch size: 70, lr: 1.40e-02, grad_scale: 4.0 2023-03-08 22:53:03,118 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-08 22:53:41,233 INFO [train.py:898] (3/4) Epoch 8, batch 2750, loss[loss=0.1947, simple_loss=0.2761, pruned_loss=0.05667, over 18543.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2851, pruned_loss=0.06088, over 3590378.18 frames. 
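[Note] The [scaling.py:679] records compare a per-group "whitening" metric against a limit: 2.0 for the 96- and 192-channel groupings (metric=1.85 and 1.29 above) and 5.0 for the single 384-channel group seen elsewhere in this section. One metric with exactly this behaviour is E[lambda^2] / E[lambda]^2 over the eigenvalues lambda of the channel covariance: it equals 1.0 for perfectly white (isotropic) features and grows as the covariance becomes anisotropic, with a penalty applied only when it exceeds the limit. A hedged sketch; the exact formula and normalisation in scaling.py may differ:

```python
import torch

# Hedged sketch of a whitening metric of the kind logged above: ~1.0 for
# "white" features, larger for anisotropic channel covariances. The grouping
# and normalisation are assumptions.
def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """x: (num_frames, num_channels); returns the mean metric over groups."""
    n, c = x.shape
    g = c // num_groups
    xg = x.reshape(n, num_groups, g).transpose(0, 1)     # (groups, frames, g)
    cov = xg.transpose(1, 2) @ xg / n                    # (groups, g, g)
    mean_eig = cov.diagonal(dim1=1, dim2=2).mean(dim=1)  # E[lambda], per group
    mean_eig_sq = (cov ** 2).sum(dim=(1, 2)) / g         # E[lambda^2], per group
    return (mean_eig_sq / mean_eig ** 2).mean()

x = torch.randn(1000, 96)                  # near-white features
print(whitening_metric(x, num_groups=8))   # ~1.0; correlated features score higher
```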
], batch size: 49, lr: 1.40e-02, grad_scale: 4.0 2023-03-08 22:53:49,857 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3993, 2.7356, 2.5166, 2.7383, 3.5322, 3.3414, 2.8667, 2.9357], device='cuda:3'), covar=tensor([0.0183, 0.0251, 0.0635, 0.0357, 0.0172, 0.0197, 0.0397, 0.0308], device='cuda:3'), in_proj_covar=tensor([0.0109, 0.0095, 0.0146, 0.0127, 0.0093, 0.0075, 0.0123, 0.0120], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0001, 0.0003, 0.0002], device='cuda:3') 2023-03-08 22:53:59,107 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3554, 5.0176, 5.4630, 5.3378, 5.1842, 5.9625, 5.5925, 5.2455], device='cuda:3'), covar=tensor([0.0812, 0.0649, 0.0568, 0.0602, 0.1401, 0.0702, 0.0619, 0.1566], device='cuda:3'), in_proj_covar=tensor([0.0287, 0.0217, 0.0226, 0.0223, 0.0267, 0.0328, 0.0214, 0.0315], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 22:54:06,666 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6029, 3.4113, 3.4107, 2.9299, 3.2404, 2.7304, 2.5577, 3.5939], device='cuda:3'), covar=tensor([0.0030, 0.0059, 0.0060, 0.0110, 0.0082, 0.0141, 0.0165, 0.0045], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0099, 0.0087, 0.0135, 0.0088, 0.0133, 0.0139, 0.0077], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-08 22:54:07,105 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-08 22:54:29,011 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28229.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:54:40,875 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.331e+02 3.538e+02 4.348e+02 5.069e+02 1.374e+03, threshold=8.696e+02, percent-clipped=5.0 2023-03-08 22:54:40,901 INFO [train.py:898] (3/4) Epoch 8, batch 2800, loss[loss=0.2146, simple_loss=0.2983, pruned_loss=0.06545, over 18393.00 frames. ], tot_loss[loss=0.2033, simple_loss=0.285, pruned_loss=0.06076, over 3601531.96 frames. ], batch size: 52, lr: 1.40e-02, grad_scale: 8.0 2023-03-08 22:54:59,990 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28255.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 22:55:07,162 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0 2023-03-08 22:55:26,400 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-08 22:55:37,574 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28287.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:55:39,228 INFO [train.py:898] (3/4) Epoch 8, batch 2850, loss[loss=0.2182, simple_loss=0.2896, pruned_loss=0.07343, over 18508.00 frames. ], tot_loss[loss=0.2032, simple_loss=0.285, pruned_loss=0.0607, over 3609876.54 frames. 
], batch size: 47, lr: 1.39e-02, grad_scale: 8.0 2023-03-08 22:55:40,883 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28290.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:55:51,013 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8396, 2.0950, 2.9906, 2.8069, 3.7184, 5.2534, 4.7765, 4.5087], device='cuda:3'), covar=tensor([0.0805, 0.1573, 0.1706, 0.1064, 0.1298, 0.0058, 0.0274, 0.0253], device='cuda:3'), in_proj_covar=tensor([0.0210, 0.0267, 0.0270, 0.0232, 0.0342, 0.0158, 0.0233, 0.0181], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-08 22:56:11,820 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28316.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 22:56:38,154 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.182e+02 3.670e+02 4.425e+02 5.374e+02 1.143e+03, threshold=8.851e+02, percent-clipped=3.0 2023-03-08 22:56:38,179 INFO [train.py:898] (3/4) Epoch 8, batch 2900, loss[loss=0.1782, simple_loss=0.25, pruned_loss=0.05319, over 18456.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2849, pruned_loss=0.06095, over 3606577.87 frames. ], batch size: 43, lr: 1.39e-02, grad_scale: 8.0 2023-03-08 22:56:49,631 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28348.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:57:14,647 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28370.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 22:57:36,804 INFO [train.py:898] (3/4) Epoch 8, batch 2950, loss[loss=0.1794, simple_loss=0.2632, pruned_loss=0.0478, over 18288.00 frames. ], tot_loss[loss=0.2026, simple_loss=0.2842, pruned_loss=0.0605, over 3609640.22 frames. ], batch size: 49, lr: 1.39e-02, grad_scale: 8.0 2023-03-08 22:58:11,172 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=28418.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 22:58:36,003 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.441e+02 3.351e+02 4.042e+02 5.395e+02 3.528e+03, threshold=8.084e+02, percent-clipped=8.0 2023-03-08 22:58:36,028 INFO [train.py:898] (3/4) Epoch 8, batch 3000, loss[loss=0.1888, simple_loss=0.2651, pruned_loss=0.0563, over 18480.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2836, pruned_loss=0.06022, over 3616326.52 frames. ], batch size: 44, lr: 1.39e-02, grad_scale: 8.0 2023-03-08 22:58:36,028 INFO [train.py:923] (3/4) Computing validation loss 2023-03-08 22:58:47,830 INFO [train.py:932] (3/4) Epoch 8, validation: loss=0.165, simple_loss=0.2676, pruned_loss=0.03118, over 944034.00 frames. 
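[Note] The validation record above (triggered at batch 3000) obeys the same 0.5/1.0 loss combination as training: 0.5 * 0.2676 + 0.03118 ~= 0.165. Training pauses, the loss is accumulated over the full, fixed dev set (hence the identical "over 944034.00 frames" on every validation pass), and peak GPU memory is reported afterwards. A hedged sketch of such a pass; compute_loss is a hypothetical helper returning a frame-summed loss:

```python
import torch

# Hedged sketch of the periodic validation pass implied by the log: iterate
# over a fixed dev set without gradients, accumulate frame-weighted losses,
# then report peak GPU memory. `compute_loss` is hypothetical and assumed to
# return (summed loss over the batch's frames, number of frames).
def run_validation(model, valid_dl, device):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_dl:
            loss, num_frames = compute_loss(model, batch, device)  # hypothetical
            tot_loss += loss.item()
            tot_frames += num_frames
    model.train()
    mem_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"validation: loss={tot_loss / tot_frames:.4g}, "
          f"over {tot_frames:.2f} frames; max memory {mem_mb}MB")
```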
2023-03-08 22:58:47,831 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-08 22:59:30,438 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5291, 2.9362, 2.4220, 2.7516, 3.6103, 3.6304, 3.1556, 2.9686], device='cuda:3'), covar=tensor([0.0151, 0.0222, 0.0628, 0.0327, 0.0154, 0.0075, 0.0236, 0.0259], device='cuda:3'), in_proj_covar=tensor([0.0106, 0.0093, 0.0143, 0.0124, 0.0089, 0.0072, 0.0118, 0.0116], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3') 2023-03-08 22:59:42,623 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28486.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 22:59:46,115 INFO [train.py:898] (3/4) Epoch 8, batch 3050, loss[loss=0.2334, simple_loss=0.314, pruned_loss=0.07644, over 17972.00 frames. ], tot_loss[loss=0.2027, simple_loss=0.2844, pruned_loss=0.06045, over 3612633.44 frames. ], batch size: 65, lr: 1.39e-02, grad_scale: 8.0 2023-03-08 22:59:49,963 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28492.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:00:44,061 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.579e+02 3.667e+02 4.403e+02 5.909e+02 1.221e+03, threshold=8.806e+02, percent-clipped=6.0 2023-03-08 23:00:44,098 INFO [train.py:898] (3/4) Epoch 8, batch 3100, loss[loss=0.2044, simple_loss=0.292, pruned_loss=0.05843, over 17758.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2849, pruned_loss=0.06098, over 3606705.72 frames. ], batch size: 70, lr: 1.39e-02, grad_scale: 8.0 2023-03-08 23:00:54,271 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28547.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:01:01,074 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28553.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:01:04,677 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5479, 1.8724, 2.7326, 2.6486, 3.4644, 5.1889, 4.6573, 4.1585], device='cuda:3'), covar=tensor([0.1016, 0.1773, 0.2000, 0.1186, 0.1495, 0.0068, 0.0291, 0.0356], device='cuda:3'), in_proj_covar=tensor([0.0210, 0.0262, 0.0268, 0.0228, 0.0336, 0.0155, 0.0231, 0.0179], device='cuda:3'), out_proj_covar=tensor([1.3568e-04, 1.6929e-04, 1.7857e-04, 1.3674e-04, 2.1799e-04, 9.9430e-05, 1.4203e-04, 1.1508e-04], device='cuda:3') 2023-03-08 23:01:39,138 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28585.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:01:39,260 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5510, 5.1157, 5.1454, 5.1126, 4.7364, 5.0621, 4.4053, 4.9762], device='cuda:3'), covar=tensor([0.0234, 0.0326, 0.0196, 0.0286, 0.0365, 0.0242, 0.1159, 0.0297], device='cuda:3'), in_proj_covar=tensor([0.0159, 0.0203, 0.0190, 0.0215, 0.0201, 0.0213, 0.0273, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-08 23:01:43,472 INFO [train.py:898] (3/4) Epoch 8, batch 3150, loss[loss=0.1766, simple_loss=0.2507, pruned_loss=0.05127, over 18457.00 frames. ], tot_loss[loss=0.2023, simple_loss=0.2838, pruned_loss=0.06036, over 3601538.57 frames. 
], batch size: 43, lr: 1.39e-02, grad_scale: 8.0 2023-03-08 23:02:10,521 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28611.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 23:02:37,041 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4950, 1.9229, 2.7995, 2.5952, 3.3022, 4.7692, 4.3761, 4.2171], device='cuda:3'), covar=tensor([0.0882, 0.1569, 0.1635, 0.1045, 0.1252, 0.0085, 0.0287, 0.0241], device='cuda:3'), in_proj_covar=tensor([0.0212, 0.0265, 0.0270, 0.0231, 0.0339, 0.0157, 0.0232, 0.0180], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-08 23:02:43,325 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.977e+02 3.448e+02 4.056e+02 5.168e+02 1.166e+03, threshold=8.112e+02, percent-clipped=4.0 2023-03-08 23:02:43,350 INFO [train.py:898] (3/4) Epoch 8, batch 3200, loss[loss=0.1797, simple_loss=0.2749, pruned_loss=0.04221, over 18318.00 frames. ], tot_loss[loss=0.2028, simple_loss=0.284, pruned_loss=0.06086, over 3593786.84 frames. ], batch size: 54, lr: 1.39e-02, grad_scale: 8.0 2023-03-08 23:02:48,292 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28643.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:03:42,526 INFO [train.py:898] (3/4) Epoch 8, batch 3250, loss[loss=0.1894, simple_loss=0.2686, pruned_loss=0.05512, over 18258.00 frames. ], tot_loss[loss=0.2035, simple_loss=0.2845, pruned_loss=0.06128, over 3589994.36 frames. ], batch size: 45, lr: 1.39e-02, grad_scale: 4.0 2023-03-08 23:04:36,292 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.58 vs. limit=2.0 2023-03-08 23:04:42,244 INFO [train.py:898] (3/4) Epoch 8, batch 3300, loss[loss=0.2207, simple_loss=0.3136, pruned_loss=0.06385, over 18627.00 frames. ], tot_loss[loss=0.2044, simple_loss=0.2854, pruned_loss=0.0617, over 3589279.88 frames. ], batch size: 52, lr: 1.38e-02, grad_scale: 4.0 2023-03-08 23:04:43,379 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.421e+02 3.488e+02 4.190e+02 5.293e+02 2.938e+03, threshold=8.380e+02, percent-clipped=5.0 2023-03-08 23:05:02,796 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2761, 5.1154, 5.4483, 5.3199, 5.1909, 5.9221, 5.5818, 5.2908], device='cuda:3'), covar=tensor([0.0871, 0.0599, 0.0618, 0.0650, 0.1207, 0.0709, 0.0491, 0.1638], device='cuda:3'), in_proj_covar=tensor([0.0286, 0.0218, 0.0224, 0.0224, 0.0265, 0.0328, 0.0212, 0.0311], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 23:05:36,927 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4980, 3.4581, 1.9772, 4.3364, 2.9926, 4.5253, 2.1803, 4.1105], device='cuda:3'), covar=tensor([0.0703, 0.0817, 0.1571, 0.0521, 0.0914, 0.0261, 0.1390, 0.0331], device='cuda:3'), in_proj_covar=tensor([0.0179, 0.0207, 0.0175, 0.0216, 0.0175, 0.0213, 0.0187, 0.0178], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 23:05:41,115 INFO [train.py:898] (3/4) Epoch 8, batch 3350, loss[loss=0.21, simple_loss=0.2858, pruned_loss=0.06707, over 18351.00 frames. ], tot_loss[loss=0.2047, simple_loss=0.286, pruned_loss=0.06165, over 3595390.17 frames. ], batch size: 46, lr: 1.38e-02, grad_scale: 4.0 2023-03-08 23:06:22,359 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. 
limit=5.0 2023-03-08 23:06:40,364 INFO [train.py:898] (3/4) Epoch 8, batch 3400, loss[loss=0.2649, simple_loss=0.3324, pruned_loss=0.09872, over 12619.00 frames. ], tot_loss[loss=0.2057, simple_loss=0.2867, pruned_loss=0.06235, over 3576177.00 frames. ], batch size: 130, lr: 1.38e-02, grad_scale: 4.0 2023-03-08 23:06:40,942 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4618, 1.9242, 2.7184, 2.7087, 3.3341, 5.2200, 4.5465, 4.2403], device='cuda:3'), covar=tensor([0.1053, 0.1820, 0.1911, 0.1155, 0.1600, 0.0057, 0.0369, 0.0330], device='cuda:3'), in_proj_covar=tensor([0.0212, 0.0266, 0.0271, 0.0231, 0.0337, 0.0155, 0.0233, 0.0182], device='cuda:3'), out_proj_covar=tensor([1.3651e-04, 1.7121e-04, 1.8029e-04, 1.3824e-04, 2.1835e-04, 9.9399e-05, 1.4263e-04, 1.1647e-04], device='cuda:3') 2023-03-08 23:06:41,527 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.584e+02 3.993e+02 4.817e+02 5.813e+02 1.322e+03, threshold=9.634e+02, percent-clipped=7.0 2023-03-08 23:06:44,140 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28842.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:06:50,932 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28848.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:06:55,637 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3952, 5.0718, 5.4333, 5.3789, 5.3155, 6.0782, 5.7229, 5.4073], device='cuda:3'), covar=tensor([0.0810, 0.0606, 0.0563, 0.0576, 0.1271, 0.0599, 0.0517, 0.1395], device='cuda:3'), in_proj_covar=tensor([0.0282, 0.0214, 0.0221, 0.0222, 0.0263, 0.0321, 0.0209, 0.0307], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 23:06:57,296 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.82 vs. limit=2.0 2023-03-08 23:07:03,155 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8097, 4.5320, 4.7409, 3.4974, 3.8034, 3.6784, 2.7169, 2.3802], device='cuda:3'), covar=tensor([0.0206, 0.0181, 0.0047, 0.0234, 0.0291, 0.0236, 0.0733, 0.0903], device='cuda:3'), in_proj_covar=tensor([0.0052, 0.0044, 0.0043, 0.0056, 0.0075, 0.0052, 0.0069, 0.0075], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-08 23:07:07,310 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28861.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:07:35,101 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28885.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:07:39,397 INFO [train.py:898] (3/4) Epoch 8, batch 3450, loss[loss=0.2049, simple_loss=0.291, pruned_loss=0.05935, over 18293.00 frames. ], tot_loss[loss=0.2037, simple_loss=0.2844, pruned_loss=0.06147, over 3585087.01 frames. 
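The scaling.py Whitening lines (num_groups/num_channels with a metric checked against a limit of 2.0 or 5.0) watch how far a module's activations drift from being white. One plausible reading of the printed metric, assumed here rather than quoted from scaling.py, is the mean squared eigenvalue of each channel group's covariance divided by the squared mean eigenvalue: exactly 1.0 for perfectly whitened features, larger as variance concentrates in a few directions, and logged when it crosses the limit. A sketch:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """x: (num_frames, num_channels) activations; channels split into groups."""
    num_frames, num_channels = x.shape
    d = num_channels // num_groups
    xg = x.reshape(num_frames, num_groups, d).transpose(0, 1)  # (groups, frames, d)
    cov = xg.transpose(1, 2) @ xg / num_frames                 # per-group covariance
    tr = cov.diagonal(dim1=1, dim2=2).sum(-1)                  # sum of eigenvalues
    tr2 = (cov * cov).sum(dim=(1, 2))                          # sum of squared eigenvalues
    # mean squared eigenvalue over squared mean eigenvalue, averaged across groups
    return (d * tr2 / tr.pow(2)).mean().item()

Under this reading, the limit=2.0 checks on 96-channel, 8-group tensors and the limit=5.0 checks on the full 384-channel blocks above simply allow different amounts of anisotropy before the constraint activates.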
], batch size: 54, lr: 1.38e-02, grad_scale: 4.0 2023-03-08 23:08:06,150 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28911.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 23:08:18,930 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28922.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:08:31,113 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=28933.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:08:39,009 INFO [train.py:898] (3/4) Epoch 8, batch 3500, loss[loss=0.1792, simple_loss=0.2575, pruned_loss=0.05043, over 18163.00 frames. ], tot_loss[loss=0.2036, simple_loss=0.2848, pruned_loss=0.06125, over 3585975.14 frames. ], batch size: 44, lr: 1.38e-02, grad_scale: 4.0 2023-03-08 23:08:40,153 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.088e+02 3.743e+02 4.474e+02 5.691e+02 1.966e+03, threshold=8.949e+02, percent-clipped=4.0 2023-03-08 23:08:43,954 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28943.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:08:56,939 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5225, 1.9473, 2.5533, 2.6562, 3.2374, 4.6110, 4.1552, 3.9695], device='cuda:3'), covar=tensor([0.0878, 0.1605, 0.1902, 0.1100, 0.1422, 0.0117, 0.0366, 0.0295], device='cuda:3'), in_proj_covar=tensor([0.0213, 0.0267, 0.0274, 0.0233, 0.0341, 0.0158, 0.0234, 0.0183], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-08 23:09:02,278 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=28959.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 23:09:24,401 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-08 23:09:35,651 INFO [train.py:898] (3/4) Epoch 8, batch 3550, loss[loss=0.1747, simple_loss=0.2531, pruned_loss=0.04819, over 18361.00 frames. ], tot_loss[loss=0.2037, simple_loss=0.2852, pruned_loss=0.06112, over 3584357.62 frames. ], batch size: 42, lr: 1.38e-02, grad_scale: 4.0 2023-03-08 23:09:37,918 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=28991.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:09:50,592 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29002.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:10:30,399 INFO [train.py:898] (3/4) Epoch 8, batch 3600, loss[loss=0.2089, simple_loss=0.2961, pruned_loss=0.06088, over 17850.00 frames. ], tot_loss[loss=0.2032, simple_loss=0.2849, pruned_loss=0.06071, over 3594965.84 frames. 
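The zipformer.py:625 lines trace stochastic layer skipping: each encoder stack prints its warm-up window (warmup_begin/warmup_end), the running batch_count, and which of its layers, usually none, were randomly dropped for this batch. A toy reconstruction of how such a line could be produced; the drop probability and seeding are illustrative assumptions, not the actual zipformer.py schedule:

import random

def pick_layers_to_drop(num_layers: int, batch_count: float,
                        drop_prob: float = 0.05, seed: int = 42) -> set:
    # Independently consider each layer with a small probability; most calls
    # return an empty set, matching the frequent "num_to_drop=0,
    # layers_to_drop=set()" lines, with an occasional one- or two-layer drop.
    rng = random.Random(seed + int(batch_count))
    to_drop = {i for i in range(num_layers) if rng.random() < drop_prob}
    print(f"batch_count={batch_count}, num_to_drop={len(to_drop)}, "
          f"layers_to_drop={to_drop or set()}")
    return to_drop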
], batch size: 70, lr: 1.38e-02, grad_scale: 8.0 2023-03-08 23:10:31,432 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.295e+02 3.435e+02 4.194e+02 5.068e+02 1.055e+03, threshold=8.389e+02, percent-clipped=1.0 2023-03-08 23:10:56,633 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0986, 4.5449, 4.1379, 4.3199, 4.2007, 4.7661, 4.3917, 4.2752], device='cuda:3'), covar=tensor([0.1379, 0.1110, 0.0939, 0.0793, 0.1475, 0.1125, 0.0875, 0.1574], device='cuda:3'), in_proj_covar=tensor([0.0281, 0.0214, 0.0219, 0.0220, 0.0262, 0.0322, 0.0209, 0.0305], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 23:10:56,815 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29063.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:11:37,016 INFO [train.py:898] (3/4) Epoch 9, batch 0, loss[loss=0.2126, simple_loss=0.2937, pruned_loss=0.0657, over 17942.00 frames. ], tot_loss[loss=0.2126, simple_loss=0.2937, pruned_loss=0.0657, over 17942.00 frames. ], batch size: 65, lr: 1.30e-02, grad_scale: 8.0 2023-03-08 23:11:37,016 INFO [train.py:923] (3/4) Computing validation loss 2023-03-08 23:11:48,955 INFO [train.py:932] (3/4) Epoch 9, validation: loss=0.1674, simple_loss=0.2698, pruned_loss=0.03254, over 944034.00 frames. 2023-03-08 23:11:48,956 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-08 23:12:01,506 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0209, 4.1143, 5.2359, 2.8463, 4.1914, 2.6232, 3.0892, 2.1310], device='cuda:3'), covar=tensor([0.0782, 0.0555, 0.0047, 0.0613, 0.0543, 0.1962, 0.2105, 0.1524], device='cuda:3'), in_proj_covar=tensor([0.0184, 0.0197, 0.0098, 0.0154, 0.0211, 0.0232, 0.0253, 0.0196], device='cuda:3'), out_proj_covar=tensor([1.6442e-04, 1.8209e-04, 9.2276e-05, 1.4066e-04, 1.9555e-04, 2.1618e-04, 2.3282e-04, 1.8369e-04], device='cuda:3') 2023-03-08 23:12:48,241 INFO [train.py:898] (3/4) Epoch 9, batch 50, loss[loss=0.177, simple_loss=0.2591, pruned_loss=0.04746, over 18279.00 frames. ], tot_loss[loss=0.2011, simple_loss=0.2844, pruned_loss=0.05887, over 811441.06 frames. ], batch size: 47, lr: 1.30e-02, grad_scale: 8.0 2023-03-08 23:13:08,326 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.388e+02 3.677e+02 4.281e+02 4.949e+02 1.360e+03, threshold=8.563e+02, percent-clipped=6.0 2023-03-08 23:13:10,823 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29142.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:13:18,828 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29148.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:13:47,196 INFO [train.py:898] (3/4) Epoch 9, batch 100, loss[loss=0.2199, simple_loss=0.3003, pruned_loss=0.06979, over 16264.00 frames. ], tot_loss[loss=0.2014, simple_loss=0.2838, pruned_loss=0.05948, over 1433228.52 frames. 
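A note on reading the loss triples: every loss[...] and tot_loss[...] line, including the Epoch 9 validation result above, satisfies loss = 0.5 * simple_loss + pruned_loss to rounding, i.e. the reported loss is the pruned-transducer objective with the simple (trivial-joiner) loss mixed in at a weight of 0.5. A quick check against two lines above:

def combined(simple_loss: float, pruned_loss: float) -> float:
    return 0.5 * simple_loss + pruned_loss

print(combined(0.2698, 0.03254))   # 0.16744 -> reported as validation loss=0.1674
print(combined(0.2937, 0.0657))    # 0.21255 -> reported as loss=0.2126 at batch 0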
], batch size: 94, lr: 1.30e-02, grad_scale: 8.0 2023-03-08 23:14:07,667 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=29190.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:14:14,369 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=29196.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:14:39,359 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29217.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:14:46,035 INFO [train.py:898] (3/4) Epoch 9, batch 150, loss[loss=0.2121, simple_loss=0.2879, pruned_loss=0.06812, over 18425.00 frames. ], tot_loss[loss=0.2012, simple_loss=0.283, pruned_loss=0.0597, over 1909632.18 frames. ], batch size: 48, lr: 1.30e-02, grad_scale: 8.0 2023-03-08 23:15:00,907 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29236.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:15:05,098 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.617e+02 3.595e+02 4.206e+02 4.990e+02 1.116e+03, threshold=8.412e+02, percent-clipped=1.0 2023-03-08 23:15:23,428 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.50 vs. limit=5.0 2023-03-08 23:15:44,240 INFO [train.py:898] (3/4) Epoch 9, batch 200, loss[loss=0.1835, simple_loss=0.2655, pruned_loss=0.05076, over 18543.00 frames. ], tot_loss[loss=0.2022, simple_loss=0.2842, pruned_loss=0.06011, over 2281338.05 frames. ], batch size: 49, lr: 1.30e-02, grad_scale: 8.0 2023-03-08 23:15:45,911 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2988, 4.2430, 5.3989, 3.1371, 4.4962, 2.8721, 3.1176, 2.3885], device='cuda:3'), covar=tensor([0.0675, 0.0544, 0.0048, 0.0554, 0.0487, 0.1928, 0.2212, 0.1416], device='cuda:3'), in_proj_covar=tensor([0.0183, 0.0198, 0.0100, 0.0156, 0.0211, 0.0233, 0.0256, 0.0196], device='cuda:3'), out_proj_covar=tensor([1.6398e-04, 1.8226e-04, 9.3622e-05, 1.4233e-04, 1.9581e-04, 2.1621e-04, 2.3440e-04, 1.8336e-04], device='cuda:3') 2023-03-08 23:16:13,786 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29297.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:16:44,289 INFO [train.py:898] (3/4) Epoch 9, batch 250, loss[loss=0.1781, simple_loss=0.2596, pruned_loss=0.04827, over 18245.00 frames. ], tot_loss[loss=0.2012, simple_loss=0.2833, pruned_loss=0.05951, over 2576118.52 frames. ], batch size: 45, lr: 1.30e-02, grad_scale: 8.0 2023-03-08 23:17:03,791 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.373e+02 3.539e+02 4.403e+02 5.304e+02 1.212e+03, threshold=8.805e+02, percent-clipped=3.0 2023-03-08 23:17:25,596 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29358.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:17:43,324 INFO [train.py:898] (3/4) Epoch 9, batch 300, loss[loss=0.1934, simple_loss=0.2797, pruned_loss=0.05352, over 18349.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2841, pruned_loss=0.05992, over 2798640.66 frames. ], batch size: 55, lr: 1.30e-02, grad_scale: 8.0 2023-03-08 23:18:12,504 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.41 vs. limit=5.0 2023-03-08 23:18:42,970 INFO [train.py:898] (3/4) Epoch 9, batch 350, loss[loss=0.1776, simple_loss=0.2501, pruned_loss=0.05253, over 18452.00 frames. ], tot_loss[loss=0.2018, simple_loss=0.2839, pruned_loss=0.05986, over 2971466.69 frames. 
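The tot_loss[... over N frames] figures are not plain epoch totals. The frame counter is fractional, and across the start of Epoch 9 it climbs from 17,942 frames at batch 0 through roughly 1.9M at batch 150 and 3.0M by batch 350, heading back toward the ~3.6M plateau of late Epoch 8. That trajectory matches an exponentially decayed running sum with decay about 0.995, i.e. an effective window of ~200 batches: at roughly 18k frames per batch, 18,000 / 0.005 = 3.6M. A sketch with the decay inferred from those numbers rather than read out of train.py:

class RunningLoss:
    """Frame-weighted, exponentially decayed loss average (sketch)."""
    def __init__(self, decay: float = 0.995):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss_sum: float, batch_frames: float) -> None:
        self.loss_sum = self.loss_sum * self.decay + batch_loss_sum
        self.frames = self.frames * self.decay + batch_frames

    @property
    def value(self) -> float:
        return self.loss_sum / self.frames

# With ~18k frames per batch the frame counter saturates near
# 18_000 / (1 - 0.995) = 3.6e6, matching "over 3601538.57 frames" above.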
], batch size: 43, lr: 1.30e-02, grad_scale: 8.0 2023-03-08 23:19:02,289 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.167e+02 3.494e+02 4.009e+02 5.079e+02 1.236e+03, threshold=8.018e+02, percent-clipped=2.0 2023-03-08 23:19:41,980 INFO [train.py:898] (3/4) Epoch 9, batch 400, loss[loss=0.1634, simple_loss=0.2427, pruned_loss=0.04201, over 18508.00 frames. ], tot_loss[loss=0.2019, simple_loss=0.2842, pruned_loss=0.05979, over 3095316.48 frames. ], batch size: 47, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:20:34,138 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29517.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:20:40,637 INFO [train.py:898] (3/4) Epoch 9, batch 450, loss[loss=0.2107, simple_loss=0.2957, pruned_loss=0.06288, over 16949.00 frames. ], tot_loss[loss=0.2014, simple_loss=0.2838, pruned_loss=0.05955, over 3210132.56 frames. ], batch size: 78, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:20:59,762 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.081e+02 3.569e+02 4.141e+02 5.253e+02 9.990e+02, threshold=8.283e+02, percent-clipped=5.0 2023-03-08 23:21:07,161 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5747, 3.4032, 1.8147, 4.2811, 2.9811, 4.4478, 2.2620, 3.9908], device='cuda:3'), covar=tensor([0.0537, 0.0786, 0.1498, 0.0367, 0.0849, 0.0244, 0.1168, 0.0350], device='cuda:3'), in_proj_covar=tensor([0.0178, 0.0204, 0.0174, 0.0215, 0.0174, 0.0214, 0.0186, 0.0176], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 23:21:12,932 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29551.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:21:30,030 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=29565.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:21:40,047 INFO [train.py:898] (3/4) Epoch 9, batch 500, loss[loss=0.1704, simple_loss=0.2432, pruned_loss=0.04886, over 18478.00 frames. ], tot_loss[loss=0.2015, simple_loss=0.2839, pruned_loss=0.05956, over 3295595.78 frames. ], batch size: 44, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:21:47,426 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.81 vs. limit=5.0 2023-03-08 23:22:01,728 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29592.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:22:06,978 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.23 vs. limit=5.0 2023-03-08 23:22:25,333 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29612.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:22:38,319 INFO [train.py:898] (3/4) Epoch 9, batch 550, loss[loss=0.2499, simple_loss=0.3165, pruned_loss=0.09162, over 12149.00 frames. ], tot_loss[loss=0.2023, simple_loss=0.2841, pruned_loss=0.06027, over 3348412.03 frames. 
], batch size: 129, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:22:58,652 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.384e+02 3.638e+02 4.636e+02 5.710e+02 1.392e+03, threshold=9.272e+02, percent-clipped=6.0 2023-03-08 23:23:19,539 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29658.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:23:24,150 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29662.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:23:37,408 INFO [train.py:898] (3/4) Epoch 9, batch 600, loss[loss=0.174, simple_loss=0.2535, pruned_loss=0.04727, over 18495.00 frames. ], tot_loss[loss=0.2014, simple_loss=0.2834, pruned_loss=0.05967, over 3412336.41 frames. ], batch size: 47, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:23:45,827 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. limit=2.0 2023-03-08 23:24:01,950 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6010, 3.2735, 1.9378, 4.2817, 2.9202, 4.3815, 2.3056, 3.9668], device='cuda:3'), covar=tensor([0.0564, 0.0900, 0.1582, 0.0399, 0.0901, 0.0248, 0.1239, 0.0344], device='cuda:3'), in_proj_covar=tensor([0.0175, 0.0202, 0.0171, 0.0211, 0.0170, 0.0210, 0.0183, 0.0173], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-08 23:24:06,527 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8140, 3.7380, 3.6325, 3.1606, 3.4478, 2.9428, 2.9098, 3.7285], device='cuda:3'), covar=tensor([0.0029, 0.0066, 0.0056, 0.0113, 0.0071, 0.0134, 0.0159, 0.0049], device='cuda:3'), in_proj_covar=tensor([0.0076, 0.0102, 0.0091, 0.0140, 0.0092, 0.0137, 0.0145, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-08 23:24:16,722 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=29706.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:24:36,208 INFO [train.py:898] (3/4) Epoch 9, batch 650, loss[loss=0.2297, simple_loss=0.3171, pruned_loss=0.07112, over 18355.00 frames. ], tot_loss[loss=0.2006, simple_loss=0.2823, pruned_loss=0.0594, over 3452139.46 frames. ], batch size: 55, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:24:36,686 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29723.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 23:24:57,201 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.211e+02 3.283e+02 4.124e+02 5.101e+02 2.453e+03, threshold=8.247e+02, percent-clipped=7.0 2023-03-08 23:25:35,096 INFO [train.py:898] (3/4) Epoch 9, batch 700, loss[loss=0.2065, simple_loss=0.2792, pruned_loss=0.06687, over 18501.00 frames. ], tot_loss[loss=0.2, simple_loss=0.2817, pruned_loss=0.05916, over 3491380.83 frames. ], batch size: 44, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:25:43,445 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-08 23:26:10,226 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29802.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:26:34,254 INFO [train.py:898] (3/4) Epoch 9, batch 750, loss[loss=0.2169, simple_loss=0.3008, pruned_loss=0.06651, over 17959.00 frames. ], tot_loss[loss=0.1998, simple_loss=0.2816, pruned_loss=0.05897, over 3510925.27 frames. 
], batch size: 65, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:26:55,140 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.192e+02 3.216e+02 4.021e+02 4.703e+02 9.794e+02, threshold=8.041e+02, percent-clipped=2.0 2023-03-08 23:27:21,817 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29863.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:27:28,806 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5681, 3.3721, 1.9415, 4.2162, 3.0480, 4.2932, 2.1865, 3.8630], device='cuda:3'), covar=tensor([0.0501, 0.0763, 0.1482, 0.0400, 0.0756, 0.0231, 0.1176, 0.0343], device='cuda:3'), in_proj_covar=tensor([0.0178, 0.0203, 0.0172, 0.0211, 0.0171, 0.0212, 0.0183, 0.0174], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-08 23:27:31,107 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7812, 3.8041, 3.5920, 3.1841, 3.5849, 2.7376, 2.8764, 3.8101], device='cuda:3'), covar=tensor([0.0032, 0.0063, 0.0056, 0.0117, 0.0055, 0.0154, 0.0152, 0.0048], device='cuda:3'), in_proj_covar=tensor([0.0076, 0.0102, 0.0091, 0.0138, 0.0092, 0.0138, 0.0145, 0.0078], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-08 23:27:32,994 INFO [train.py:898] (3/4) Epoch 9, batch 800, loss[loss=0.2101, simple_loss=0.2987, pruned_loss=0.06077, over 18609.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2831, pruned_loss=0.05969, over 3523590.43 frames. ], batch size: 52, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:27:38,638 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.34 vs. limit=5.0 2023-03-08 23:27:56,524 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29892.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:28:13,996 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29907.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:28:16,240 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5158, 6.1393, 5.5761, 5.8827, 5.5946, 5.5792, 6.1655, 6.1031], device='cuda:3'), covar=tensor([0.1065, 0.0553, 0.0413, 0.0581, 0.1456, 0.0683, 0.0506, 0.0623], device='cuda:3'), in_proj_covar=tensor([0.0484, 0.0387, 0.0301, 0.0430, 0.0598, 0.0430, 0.0550, 0.0423], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 23:28:32,030 INFO [train.py:898] (3/4) Epoch 9, batch 850, loss[loss=0.1995, simple_loss=0.2859, pruned_loss=0.05656, over 18501.00 frames. ], tot_loss[loss=0.2011, simple_loss=0.2828, pruned_loss=0.05965, over 3541975.38 frames. ], batch size: 53, lr: 1.29e-02, grad_scale: 8.0 2023-03-08 23:28:52,834 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.076e+02 3.769e+02 4.369e+02 5.199e+02 8.545e+02, threshold=8.737e+02, percent-clipped=2.0 2023-03-08 23:28:53,054 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=29940.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:29:31,701 INFO [train.py:898] (3/4) Epoch 9, batch 900, loss[loss=0.1906, simple_loss=0.271, pruned_loss=0.05513, over 18493.00 frames. ], tot_loss[loss=0.2005, simple_loss=0.2827, pruned_loss=0.0591, over 3552297.41 frames. 
], batch size: 47, lr: 1.28e-02, grad_scale: 8.0 2023-03-08 23:30:30,142 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30018.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 23:30:35,482 INFO [train.py:898] (3/4) Epoch 9, batch 950, loss[loss=0.1953, simple_loss=0.2765, pruned_loss=0.05701, over 18555.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.282, pruned_loss=0.05872, over 3558275.79 frames. ], batch size: 49, lr: 1.28e-02, grad_scale: 8.0 2023-03-08 23:30:56,115 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.525e+02 3.421e+02 3.887e+02 4.758e+02 7.533e+02, threshold=7.773e+02, percent-clipped=0.0 2023-03-08 23:31:02,733 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.81 vs. limit=2.0 2023-03-08 23:31:34,974 INFO [train.py:898] (3/4) Epoch 9, batch 1000, loss[loss=0.1653, simple_loss=0.2434, pruned_loss=0.04362, over 18424.00 frames. ], tot_loss[loss=0.2006, simple_loss=0.2827, pruned_loss=0.05923, over 3569186.46 frames. ], batch size: 42, lr: 1.28e-02, grad_scale: 8.0 2023-03-08 23:32:08,641 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.72 vs. limit=5.0 2023-03-08 23:32:16,246 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8371, 4.5658, 4.7833, 3.4775, 3.8111, 3.5390, 2.6026, 2.1255], device='cuda:3'), covar=tensor([0.0255, 0.0166, 0.0063, 0.0255, 0.0348, 0.0189, 0.0762, 0.0984], device='cuda:3'), in_proj_covar=tensor([0.0053, 0.0044, 0.0043, 0.0056, 0.0077, 0.0051, 0.0070, 0.0076], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-08 23:32:34,157 INFO [train.py:898] (3/4) Epoch 9, batch 1050, loss[loss=0.1978, simple_loss=0.277, pruned_loss=0.05926, over 18417.00 frames. ], tot_loss[loss=0.2, simple_loss=0.2821, pruned_loss=0.0589, over 3572217.59 frames. ], batch size: 48, lr: 1.28e-02, grad_scale: 8.0 2023-03-08 23:32:36,810 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30125.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:32:53,969 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.406e+02 3.588e+02 4.236e+02 5.310e+02 1.075e+03, threshold=8.473e+02, percent-clipped=3.0 2023-03-08 23:33:15,611 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30158.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:33:32,455 INFO [train.py:898] (3/4) Epoch 9, batch 1100, loss[loss=0.2021, simple_loss=0.2814, pruned_loss=0.06139, over 17100.00 frames. ], tot_loss[loss=0.2005, simple_loss=0.2827, pruned_loss=0.05921, over 3577160.61 frames. 
], batch size: 78, lr: 1.28e-02, grad_scale: 8.0 2023-03-08 23:33:47,876 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30186.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:34:12,778 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30207.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:34:15,624 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30209.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:34:16,686 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4623, 5.1492, 5.6685, 5.4350, 5.3718, 6.2054, 5.7150, 5.4644], device='cuda:3'), covar=tensor([0.0893, 0.0619, 0.0534, 0.0661, 0.1304, 0.0685, 0.0508, 0.1640], device='cuda:3'), in_proj_covar=tensor([0.0288, 0.0217, 0.0229, 0.0229, 0.0270, 0.0328, 0.0213, 0.0318], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-08 23:34:31,342 INFO [train.py:898] (3/4) Epoch 9, batch 1150, loss[loss=0.2126, simple_loss=0.299, pruned_loss=0.06315, over 17868.00 frames. ], tot_loss[loss=0.2009, simple_loss=0.2832, pruned_loss=0.05932, over 3571433.43 frames. ], batch size: 70, lr: 1.28e-02, grad_scale: 8.0 2023-03-08 23:34:50,754 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.527e+02 3.691e+02 4.407e+02 5.401e+02 1.436e+03, threshold=8.814e+02, percent-clipped=4.0 2023-03-08 23:35:09,081 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=30255.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:35:27,518 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30270.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:35:30,584 INFO [train.py:898] (3/4) Epoch 9, batch 1200, loss[loss=0.1878, simple_loss=0.2607, pruned_loss=0.05745, over 18489.00 frames. ], tot_loss[loss=0.2004, simple_loss=0.2825, pruned_loss=0.0591, over 3586284.79 frames. ], batch size: 44, lr: 1.28e-02, grad_scale: 8.0 2023-03-08 23:35:40,323 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3697, 4.3517, 2.5440, 4.5170, 5.4306, 2.4707, 4.0213, 4.1273], device='cuda:3'), covar=tensor([0.0062, 0.1031, 0.1456, 0.0424, 0.0037, 0.1281, 0.0532, 0.0613], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0215, 0.0185, 0.0183, 0.0083, 0.0170, 0.0197, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-08 23:36:24,821 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30318.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 23:36:30,184 INFO [train.py:898] (3/4) Epoch 9, batch 1250, loss[loss=0.1908, simple_loss=0.2789, pruned_loss=0.05136, over 18484.00 frames. ], tot_loss[loss=0.2001, simple_loss=0.2821, pruned_loss=0.05902, over 3579156.08 frames. 
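The zipformer.py:1455 attn_weights_entropy dumps are diagnostics, plausibly one average attention-entropy value per head or per layer: near 0 when a head locks onto single frames, near log(key_len) when it spreads uniformly, which is the scale of the 4-to-6 range entries above. A sketch under the assumption that the weights are normalized over the key axis:

import torch

def attention_entropy(attn: torch.Tensor) -> torch.Tensor:
    """attn: (num_heads, query_len, key_len), rows summing to 1."""
    ent = -(attn * (attn + 1.0e-20).log()).sum(dim=-1)  # entropy per query position
    return ent.mean(dim=-1)                             # average -> one value per head

# Example: a head that is uniform over 100 keys has entropy log(100) ~ 4.61,
# the same order as the larger entries in the tensors above.
uniform = torch.full((1, 10, 100), 0.01)
print(attention_entropy(uniform))   # tensor([4.6052])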
], batch size: 53, lr: 1.28e-02, grad_scale: 8.0 2023-03-08 23:36:49,689 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.180e+02 3.477e+02 4.151e+02 5.169e+02 1.110e+03, threshold=8.302e+02, percent-clipped=2.0 2023-03-08 23:37:14,334 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30360.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:37:22,273 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=30366.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:37:29,898 INFO [train.py:898] (3/4) Epoch 9, batch 1300, loss[loss=0.1955, simple_loss=0.2824, pruned_loss=0.05426, over 18329.00 frames. ], tot_loss[loss=0.1998, simple_loss=0.2818, pruned_loss=0.05891, over 3585441.69 frames. ], batch size: 54, lr: 1.28e-02, grad_scale: 8.0 2023-03-08 23:38:20,390 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0804, 4.1928, 2.3759, 4.1973, 5.1799, 2.5568, 3.7007, 3.7828], device='cuda:3'), covar=tensor([0.0083, 0.1020, 0.1613, 0.0480, 0.0048, 0.1236, 0.0640, 0.0680], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0213, 0.0183, 0.0183, 0.0083, 0.0170, 0.0197, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0003, 0.0002, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-08 23:38:27,135 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30421.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:38:28,924 INFO [train.py:898] (3/4) Epoch 9, batch 1350, loss[loss=0.1884, simple_loss=0.2729, pruned_loss=0.05197, over 18255.00 frames. ], tot_loss[loss=0.1998, simple_loss=0.2819, pruned_loss=0.05886, over 3589117.96 frames. ], batch size: 47, lr: 1.27e-02, grad_scale: 8.0 2023-03-08 23:38:48,275 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.096e+02 3.366e+02 4.141e+02 5.097e+02 1.343e+03, threshold=8.282e+02, percent-clipped=5.0 2023-03-08 23:39:09,483 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30458.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:39:27,632 INFO [train.py:898] (3/4) Epoch 9, batch 1400, loss[loss=0.176, simple_loss=0.2564, pruned_loss=0.04776, over 18355.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2813, pruned_loss=0.05861, over 3599339.53 frames. ], batch size: 46, lr: 1.27e-02, grad_scale: 8.0 2023-03-08 23:39:37,294 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30481.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:40:06,881 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=30506.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:40:26,572 INFO [train.py:898] (3/4) Epoch 9, batch 1450, loss[loss=0.1992, simple_loss=0.2843, pruned_loss=0.05703, over 18573.00 frames. ], tot_loss[loss=0.199, simple_loss=0.2809, pruned_loss=0.05849, over 3599307.46 frames. 
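The learning rate decays smoothly inside the epoch (1.39e-02 late in Epoch 8, 1.27e-02 by this point) rather than stepping at epoch boundaries, consistent with an Eden-style schedule that is polynomial in both the global batch count and the completed-epoch count. Assuming the form below with the configured base_lr=0.05, lr_batches=5000 and lr_epochs=3.5, the printed values are reproduced to three figures:

def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    # Assumed form of the schedule, checked numerically against this log.
    batch_factor = ((batch**2 + lr_batches**2) / lr_batches**2) ** -0.25
    epoch_factor = ((epoch**2 + lr_epochs**2) / lr_epochs**2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# Late in Epoch 8 (~28,500 batches seen, 7 completed epochs):
print(f"{eden_lr(0.05, 28500, 7):.2e}")   # 1.39e-02, as logged above
# Mid Epoch 9 (~30,500 batches seen, 8 completed epochs):
print(f"{eden_lr(0.05, 30500, 8):.2e}")   # 1.27e-02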
], batch size: 54, lr: 1.27e-02, grad_scale: 8.0 2023-03-08 23:40:46,480 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.569e+02 3.548e+02 4.179e+02 5.117e+02 1.123e+03, threshold=8.357e+02, percent-clipped=2.0 2023-03-08 23:40:57,056 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4237, 2.5916, 3.9048, 3.7376, 2.7581, 4.4071, 4.0139, 2.7034], device='cuda:3'), covar=tensor([0.0444, 0.1415, 0.0251, 0.0270, 0.1396, 0.0153, 0.0336, 0.1050], device='cuda:3'), in_proj_covar=tensor([0.0177, 0.0209, 0.0137, 0.0133, 0.0204, 0.0171, 0.0193, 0.0187], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 23:41:16,084 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30565.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:41:24,910 INFO [train.py:898] (3/4) Epoch 9, batch 1500, loss[loss=0.1695, simple_loss=0.2505, pruned_loss=0.04421, over 18497.00 frames. ], tot_loss[loss=0.1987, simple_loss=0.2809, pruned_loss=0.05826, over 3600345.88 frames. ], batch size: 44, lr: 1.27e-02, grad_scale: 8.0 2023-03-08 23:42:24,047 INFO [train.py:898] (3/4) Epoch 9, batch 1550, loss[loss=0.1849, simple_loss=0.2678, pruned_loss=0.05107, over 18406.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2811, pruned_loss=0.0583, over 3589854.63 frames. ], batch size: 48, lr: 1.27e-02, grad_scale: 8.0 2023-03-08 23:42:44,355 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.188e+02 3.439e+02 3.957e+02 5.356e+02 1.144e+03, threshold=7.914e+02, percent-clipped=4.0 2023-03-08 23:43:01,922 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30655.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:43:23,619 INFO [train.py:898] (3/4) Epoch 9, batch 1600, loss[loss=0.1914, simple_loss=0.2667, pruned_loss=0.05809, over 18356.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2817, pruned_loss=0.05847, over 3584814.86 frames. ], batch size: 46, lr: 1.27e-02, grad_scale: 8.0 2023-03-08 23:44:15,196 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30716.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:44:15,441 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30716.0, num_to_drop=1, layers_to_drop={1} 2023-03-08 23:44:22,846 INFO [train.py:898] (3/4) Epoch 9, batch 1650, loss[loss=0.2052, simple_loss=0.2972, pruned_loss=0.05655, over 18476.00 frames. ], tot_loss[loss=0.1984, simple_loss=0.2812, pruned_loss=0.05786, over 3592560.66 frames. 
], batch size: 53, lr: 1.27e-02, grad_scale: 16.0 2023-03-08 23:44:35,285 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5032, 3.4477, 1.8517, 4.2852, 2.9717, 4.0940, 2.1384, 3.7122], device='cuda:3'), covar=tensor([0.0470, 0.0651, 0.1526, 0.0348, 0.0795, 0.0335, 0.1264, 0.0404], device='cuda:3'), in_proj_covar=tensor([0.0178, 0.0203, 0.0174, 0.0214, 0.0172, 0.0214, 0.0183, 0.0175], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 23:44:41,442 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3005, 4.3755, 4.4169, 4.1449, 4.0793, 4.2158, 4.5256, 4.5164], device='cuda:3'), covar=tensor([0.0067, 0.0066, 0.0050, 0.0087, 0.0071, 0.0115, 0.0070, 0.0083], device='cuda:3'), in_proj_covar=tensor([0.0071, 0.0052, 0.0054, 0.0067, 0.0058, 0.0078, 0.0065, 0.0065], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-08 23:44:43,337 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.525e+02 3.575e+02 4.515e+02 5.590e+02 1.202e+03, threshold=9.030e+02, percent-clipped=6.0 2023-03-08 23:45:22,490 INFO [train.py:898] (3/4) Epoch 9, batch 1700, loss[loss=0.1901, simple_loss=0.2652, pruned_loss=0.05754, over 18267.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2812, pruned_loss=0.05801, over 3591815.64 frames. ], batch size: 47, lr: 1.27e-02, grad_scale: 16.0 2023-03-08 23:45:32,823 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30781.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:46:22,136 INFO [train.py:898] (3/4) Epoch 9, batch 1750, loss[loss=0.1923, simple_loss=0.2734, pruned_loss=0.05559, over 18271.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.2818, pruned_loss=0.05855, over 3591838.07 frames. ], batch size: 47, lr: 1.27e-02, grad_scale: 16.0 2023-03-08 23:46:29,122 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=30829.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:46:43,007 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.320e+02 3.498e+02 4.092e+02 4.863e+02 1.038e+03, threshold=8.183e+02, percent-clipped=2.0 2023-03-08 23:47:11,725 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30865.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:47:13,417 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.79 vs. limit=2.0 2023-03-08 23:47:20,586 INFO [train.py:898] (3/4) Epoch 9, batch 1800, loss[loss=0.1991, simple_loss=0.282, pruned_loss=0.05817, over 17078.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2818, pruned_loss=0.05834, over 3580972.86 frames. ], batch size: 78, lr: 1.27e-02, grad_scale: 8.0 2023-03-08 23:47:26,928 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-08 23:47:57,637 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30903.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:48:09,037 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=30913.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:48:20,071 INFO [train.py:898] (3/4) Epoch 9, batch 1850, loss[loss=0.2083, simple_loss=0.2955, pruned_loss=0.06055, over 18101.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2807, pruned_loss=0.05781, over 3582195.12 frames. 
], batch size: 62, lr: 1.26e-02, grad_scale: 8.0 2023-03-08 23:48:22,741 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30925.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:48:25,824 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6188, 3.5329, 2.0462, 4.4925, 3.0790, 4.6208, 2.1773, 4.0241], device='cuda:3'), covar=tensor([0.0532, 0.0850, 0.1460, 0.0416, 0.0846, 0.0239, 0.1281, 0.0346], device='cuda:3'), in_proj_covar=tensor([0.0180, 0.0204, 0.0176, 0.0217, 0.0175, 0.0218, 0.0187, 0.0176], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-08 23:48:32,755 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6932, 5.7578, 5.2738, 5.4602, 4.8259, 5.5139, 5.7618, 5.6739], device='cuda:3'), covar=tensor([0.2696, 0.0943, 0.0816, 0.1256, 0.3025, 0.1027, 0.0969, 0.1007], device='cuda:3'), in_proj_covar=tensor([0.0501, 0.0407, 0.0305, 0.0448, 0.0613, 0.0442, 0.0563, 0.0437], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 23:48:34,025 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30934.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:48:42,059 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.240e+02 3.365e+02 3.795e+02 4.610e+02 7.960e+02, threshold=7.590e+02, percent-clipped=0.0 2023-03-08 23:49:09,566 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30964.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:49:19,301 INFO [train.py:898] (3/4) Epoch 9, batch 1900, loss[loss=0.1913, simple_loss=0.2789, pruned_loss=0.05188, over 18419.00 frames. ], tot_loss[loss=0.1978, simple_loss=0.2805, pruned_loss=0.05753, over 3572885.06 frames. 
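The grad_scale field in the train.py lines is the dynamic fp16 loss scale, and it moves the way a torch.cuda.amp.GradScaler moves it: doubled after a long run of overflow-free steps (8.0 to 16.0 near batch 1650 above), halved when a batch overflows (back to 8.0 by batch 1800). A self-contained toy of that mechanism; the Linear model and random data merely stand in for the real training step, and a GPU is assumed, as in this run:

import torch

model = torch.nn.Linear(10, 10).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(init_scale=2.0, growth_interval=1000)

for step in range(3):
    opt.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(torch.randn(4, 10, device="cuda")).pow(2).mean()
    scaler.scale(loss).backward()    # backprop through the scaled loss
    scaler.step(opt)                 # silently skipped if grads hit inf/nan
    scaler.update()                  # halve the scale on overflow, otherwise
                                     # double it every growth_interval clean steps
    print(step, scaler.get_scale())  # the value train.py logs as grad_scale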
], batch size: 48, lr: 1.26e-02, grad_scale: 8.0 2023-03-08 23:49:34,717 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30986.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:49:41,118 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7070, 3.8502, 5.4510, 4.7415, 3.2661, 3.3214, 4.6518, 5.5028], device='cuda:3'), covar=tensor([0.0789, 0.1429, 0.0062, 0.0224, 0.0792, 0.0872, 0.0273, 0.0096], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0225, 0.0089, 0.0151, 0.0169, 0.0170, 0.0160, 0.0125], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-08 23:49:45,374 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30995.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:50:04,628 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31011.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 23:50:04,795 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31011.0, num_to_drop=1, layers_to_drop={0} 2023-03-08 23:50:10,166 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31016.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:50:16,134 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6422, 3.0497, 4.2567, 4.0381, 2.9280, 4.7907, 3.9315, 3.0633], device='cuda:3'), covar=tensor([0.0361, 0.1084, 0.0208, 0.0238, 0.1206, 0.0114, 0.0390, 0.0783], device='cuda:3'), in_proj_covar=tensor([0.0174, 0.0206, 0.0137, 0.0131, 0.0201, 0.0172, 0.0192, 0.0185], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 23:50:17,962 INFO [train.py:898] (3/4) Epoch 9, batch 1950, loss[loss=0.1968, simple_loss=0.278, pruned_loss=0.05782, over 18507.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2806, pruned_loss=0.05756, over 3574439.72 frames. ], batch size: 47, lr: 1.26e-02, grad_scale: 8.0 2023-03-08 23:50:39,086 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.073e+02 3.300e+02 4.115e+02 5.228e+02 1.013e+03, threshold=8.231e+02, percent-clipped=3.0 2023-03-08 23:51:06,759 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=31064.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:51:16,308 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31072.0, num_to_drop=1, layers_to_drop={3} 2023-03-08 23:51:16,962 INFO [train.py:898] (3/4) Epoch 9, batch 2000, loss[loss=0.1925, simple_loss=0.2756, pruned_loss=0.05472, over 18499.00 frames. ], tot_loss[loss=0.1981, simple_loss=0.2811, pruned_loss=0.05761, over 3579423.25 frames. ], batch size: 44, lr: 1.26e-02, grad_scale: 8.0 2023-03-08 23:51:42,165 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6300, 2.7647, 4.1292, 4.1814, 2.7938, 4.6083, 3.9397, 2.6168], device='cuda:3'), covar=tensor([0.0362, 0.1327, 0.0272, 0.0176, 0.1359, 0.0166, 0.0341, 0.1087], device='cuda:3'), in_proj_covar=tensor([0.0174, 0.0206, 0.0136, 0.0129, 0.0199, 0.0173, 0.0191, 0.0185], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-08 23:52:15,795 INFO [train.py:898] (3/4) Epoch 9, batch 2050, loss[loss=0.1973, simple_loss=0.2866, pruned_loss=0.05407, over 17788.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2798, pruned_loss=0.0571, over 3594209.25 frames. 
], batch size: 70, lr: 1.26e-02, grad_scale: 8.0 2023-03-08 23:52:37,039 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.601e+02 3.647e+02 4.491e+02 5.379e+02 9.813e+02, threshold=8.982e+02, percent-clipped=3.0 2023-03-08 23:53:15,208 INFO [train.py:898] (3/4) Epoch 9, batch 2100, loss[loss=0.2081, simple_loss=0.2923, pruned_loss=0.06197, over 16245.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.2805, pruned_loss=0.05731, over 3598295.49 frames. ], batch size: 94, lr: 1.26e-02, grad_scale: 8.0 2023-03-08 23:53:24,418 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8902, 4.7656, 4.9997, 4.6395, 4.5927, 4.6752, 5.1645, 4.9987], device='cuda:3'), covar=tensor([0.0057, 0.0079, 0.0066, 0.0088, 0.0072, 0.0144, 0.0057, 0.0106], device='cuda:3'), in_proj_covar=tensor([0.0075, 0.0055, 0.0056, 0.0070, 0.0060, 0.0081, 0.0068, 0.0070], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-08 23:54:14,808 INFO [train.py:898] (3/4) Epoch 9, batch 2150, loss[loss=0.1963, simple_loss=0.2755, pruned_loss=0.05853, over 18540.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2803, pruned_loss=0.05755, over 3585324.09 frames. ], batch size: 49, lr: 1.26e-02, grad_scale: 8.0 2023-03-08 23:54:35,294 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.335e+02 3.491e+02 4.023e+02 4.961e+02 1.002e+03, threshold=8.046e+02, percent-clipped=2.0 2023-03-08 23:54:56,976 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31259.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:55:13,348 INFO [train.py:898] (3/4) Epoch 9, batch 2200, loss[loss=0.1959, simple_loss=0.2897, pruned_loss=0.051, over 18360.00 frames. ], tot_loss[loss=0.1985, simple_loss=0.2809, pruned_loss=0.05805, over 3580387.28 frames. ], batch size: 55, lr: 1.26e-02, grad_scale: 4.0 2023-03-08 23:55:22,620 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31281.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:55:32,535 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31290.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:55:53,482 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. limit=5.0 2023-03-08 23:55:57,388 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31311.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:56:03,071 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4261, 6.0124, 5.4146, 5.7143, 5.4697, 5.4758, 6.0324, 6.0468], device='cuda:3'), covar=tensor([0.1109, 0.0684, 0.0434, 0.0672, 0.1579, 0.0682, 0.0558, 0.0579], device='cuda:3'), in_proj_covar=tensor([0.0497, 0.0407, 0.0307, 0.0446, 0.0609, 0.0445, 0.0564, 0.0435], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 23:56:11,961 INFO [train.py:898] (3/4) Epoch 9, batch 2250, loss[loss=0.1825, simple_loss=0.2586, pruned_loss=0.05323, over 18262.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.28, pruned_loss=0.05763, over 3586662.01 frames. 
], batch size: 45, lr: 1.26e-02, grad_scale: 4.0 2023-03-08 23:56:33,578 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.219e+02 3.737e+02 4.347e+02 5.503e+02 2.827e+03, threshold=8.695e+02, percent-clipped=9.0 2023-03-08 23:56:53,440 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=31359.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:57:02,911 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31367.0, num_to_drop=1, layers_to_drop={2} 2023-03-08 23:57:09,985 INFO [train.py:898] (3/4) Epoch 9, batch 2300, loss[loss=0.1949, simple_loss=0.2894, pruned_loss=0.05027, over 18346.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.2804, pruned_loss=0.05744, over 3599800.25 frames. ], batch size: 55, lr: 1.26e-02, grad_scale: 4.0 2023-03-08 23:57:24,615 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9496, 4.6080, 4.7879, 3.4967, 3.7453, 3.7362, 2.6881, 2.0187], device='cuda:3'), covar=tensor([0.0185, 0.0157, 0.0054, 0.0223, 0.0343, 0.0168, 0.0684, 0.0909], device='cuda:3'), in_proj_covar=tensor([0.0054, 0.0044, 0.0043, 0.0056, 0.0076, 0.0052, 0.0070, 0.0076], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-08 23:57:26,823 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2721, 5.2545, 4.7105, 5.2372, 5.1723, 4.6183, 5.0801, 4.8525], device='cuda:3'), covar=tensor([0.0413, 0.0456, 0.1577, 0.0671, 0.0531, 0.0428, 0.0430, 0.0977], device='cuda:3'), in_proj_covar=tensor([0.0381, 0.0436, 0.0586, 0.0350, 0.0325, 0.0397, 0.0422, 0.0543], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-08 23:57:52,690 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31409.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:58:09,019 INFO [train.py:898] (3/4) Epoch 9, batch 2350, loss[loss=0.2036, simple_loss=0.2804, pruned_loss=0.06341, over 18291.00 frames. ], tot_loss[loss=0.1988, simple_loss=0.2814, pruned_loss=0.05811, over 3585076.45 frames. ], batch size: 49, lr: 1.25e-02, grad_scale: 4.0 2023-03-08 23:58:23,941 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4677, 5.9691, 5.5121, 5.6727, 5.5211, 5.5416, 6.0518, 5.9994], device='cuda:3'), covar=tensor([0.1161, 0.0741, 0.0445, 0.0747, 0.1383, 0.0663, 0.0552, 0.0647], device='cuda:3'), in_proj_covar=tensor([0.0487, 0.0401, 0.0302, 0.0438, 0.0598, 0.0435, 0.0555, 0.0427], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 23:58:31,513 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.260e+02 3.212e+02 3.852e+02 4.857e+02 1.133e+03, threshold=7.704e+02, percent-clipped=2.0 2023-03-08 23:58:42,686 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.88 vs. limit=5.0 2023-03-08 23:58:46,253 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.87 vs. 
limit=2.0 2023-03-08 23:58:48,204 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8590, 4.8585, 4.9082, 4.6700, 4.6344, 4.6707, 5.1101, 4.9477], device='cuda:3'), covar=tensor([0.0059, 0.0070, 0.0074, 0.0091, 0.0060, 0.0112, 0.0073, 0.0133], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0054, 0.0055, 0.0069, 0.0060, 0.0080, 0.0066, 0.0068], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0002, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-08 23:58:57,024 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4241, 5.9493, 5.4482, 5.6717, 5.4930, 5.5420, 5.9884, 5.9332], device='cuda:3'), covar=tensor([0.1234, 0.0640, 0.0428, 0.0700, 0.1513, 0.0698, 0.0580, 0.0633], device='cuda:3'), in_proj_covar=tensor([0.0490, 0.0402, 0.0303, 0.0439, 0.0601, 0.0437, 0.0557, 0.0429], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-08 23:59:04,354 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31470.0, num_to_drop=0, layers_to_drop=set() 2023-03-08 23:59:07,379 INFO [train.py:898] (3/4) Epoch 9, batch 2400, loss[loss=0.174, simple_loss=0.2481, pruned_loss=0.04995, over 18271.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2813, pruned_loss=0.05799, over 3589330.27 frames. ], batch size: 45, lr: 1.25e-02, grad_scale: 8.0 2023-03-08 23:59:29,303 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31491.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 00:00:06,631 INFO [train.py:898] (3/4) Epoch 9, batch 2450, loss[loss=0.1953, simple_loss=0.2823, pruned_loss=0.05413, over 18477.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2821, pruned_loss=0.05818, over 3583958.28 frames. ], batch size: 51, lr: 1.25e-02, grad_scale: 8.0 2023-03-09 00:00:11,718 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-09 00:00:29,272 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.799e+02 3.437e+02 4.058e+02 5.079e+02 1.109e+03, threshold=8.115e+02, percent-clipped=5.0 2023-03-09 00:00:41,033 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31552.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 00:00:49,341 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31559.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:01:05,463 INFO [train.py:898] (3/4) Epoch 9, batch 2500, loss[loss=0.2145, simple_loss=0.2994, pruned_loss=0.06487, over 18240.00 frames. ], tot_loss[loss=0.1983, simple_loss=0.2811, pruned_loss=0.05771, over 3584038.81 frames. 
], batch size: 60, lr: 1.25e-02, grad_scale: 8.0 2023-03-09 00:01:15,221 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31581.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:01:25,985 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31590.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:01:45,092 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=31607.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:01:55,827 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5199, 2.7236, 2.4373, 2.6804, 3.3923, 3.4579, 2.9183, 2.7565], device='cuda:3'), covar=tensor([0.0163, 0.0313, 0.0616, 0.0376, 0.0213, 0.0162, 0.0356, 0.0322], device='cuda:3'), in_proj_covar=tensor([0.0111, 0.0098, 0.0145, 0.0127, 0.0093, 0.0081, 0.0123, 0.0118], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:02:03,050 INFO [train.py:898] (3/4) Epoch 9, batch 2550, loss[loss=0.1823, simple_loss=0.2563, pruned_loss=0.0542, over 18401.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2816, pruned_loss=0.05809, over 3584441.81 frames. ], batch size: 42, lr: 1.25e-02, grad_scale: 8.0 2023-03-09 00:02:10,589 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=31629.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:02:21,868 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=31638.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:02:26,084 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.346e+02 3.602e+02 4.315e+02 5.683e+02 1.136e+03, threshold=8.630e+02, percent-clipped=7.0 2023-03-09 00:02:54,547 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31667.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 00:02:58,183 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9075, 2.8715, 4.4751, 4.2806, 2.7270, 4.8557, 4.0048, 2.8880], device='cuda:3'), covar=tensor([0.0350, 0.1286, 0.0177, 0.0197, 0.1329, 0.0127, 0.0374, 0.0930], device='cuda:3'), in_proj_covar=tensor([0.0178, 0.0210, 0.0136, 0.0130, 0.0202, 0.0174, 0.0192, 0.0185], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 00:03:01,163 INFO [train.py:898] (3/4) Epoch 9, batch 2600, loss[loss=0.2046, simple_loss=0.2927, pruned_loss=0.05829, over 17849.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2807, pruned_loss=0.05754, over 3598954.68 frames. 
], batch size: 70, lr: 1.25e-02, grad_scale: 8.0 2023-03-09 00:03:05,968 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5059, 3.7409, 5.2963, 4.6375, 3.2374, 3.1445, 4.4673, 5.3451], device='cuda:3'), covar=tensor([0.0857, 0.1617, 0.0063, 0.0244, 0.0878, 0.1021, 0.0328, 0.0102], device='cuda:3'), in_proj_covar=tensor([0.0134, 0.0229, 0.0090, 0.0153, 0.0170, 0.0171, 0.0160, 0.0127], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:03:37,688 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31704.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:03:50,207 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=31715.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 00:03:58,993 INFO [train.py:898] (3/4) Epoch 9, batch 2650, loss[loss=0.2214, simple_loss=0.3133, pruned_loss=0.06472, over 18345.00 frames. ], tot_loss[loss=0.1974, simple_loss=0.28, pruned_loss=0.05735, over 3601487.26 frames. ], batch size: 56, lr: 1.25e-02, grad_scale: 8.0 2023-03-09 00:04:11,219 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3798, 3.8812, 3.9535, 2.9273, 3.2406, 3.1424, 2.3568, 2.0154], device='cuda:3'), covar=tensor([0.0258, 0.0144, 0.0101, 0.0308, 0.0373, 0.0252, 0.0769, 0.0951], device='cuda:3'), in_proj_covar=tensor([0.0056, 0.0046, 0.0044, 0.0057, 0.0078, 0.0054, 0.0073, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 00:04:21,586 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.212e+02 3.574e+02 4.199e+02 5.233e+02 1.424e+03, threshold=8.398e+02, percent-clipped=3.0 2023-03-09 00:04:28,696 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0 2023-03-09 00:04:48,289 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31765.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:04:48,386 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0754, 5.0648, 4.6198, 5.0405, 5.0103, 4.4691, 4.9324, 4.7103], device='cuda:3'), covar=tensor([0.0445, 0.0407, 0.1581, 0.0676, 0.0548, 0.0438, 0.0405, 0.0809], device='cuda:3'), in_proj_covar=tensor([0.0371, 0.0422, 0.0567, 0.0337, 0.0319, 0.0387, 0.0409, 0.0535], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 00:04:48,462 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31765.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:04:57,871 INFO [train.py:898] (3/4) Epoch 9, batch 2700, loss[loss=0.1912, simple_loss=0.2732, pruned_loss=0.05464, over 18551.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2795, pruned_loss=0.05699, over 3598668.51 frames. ], batch size: 49, lr: 1.25e-02, grad_scale: 8.0 2023-03-09 00:05:15,622 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31788.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:05:56,592 INFO [train.py:898] (3/4) Epoch 9, batch 2750, loss[loss=0.1856, simple_loss=0.2658, pruned_loss=0.05268, over 18575.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2789, pruned_loss=0.05704, over 3592834.65 frames. 
], batch size: 45, lr: 1.25e-02, grad_scale: 8.0 2023-03-09 00:05:56,836 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31823.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:06:19,347 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.068e+02 3.248e+02 4.036e+02 4.734e+02 1.785e+03, threshold=8.071e+02, percent-clipped=3.0 2023-03-09 00:06:25,196 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31847.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 00:06:27,648 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31849.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:06:55,868 INFO [train.py:898] (3/4) Epoch 9, batch 2800, loss[loss=0.1844, simple_loss=0.2696, pruned_loss=0.04963, over 18374.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2796, pruned_loss=0.05719, over 3587107.95 frames. ], batch size: 50, lr: 1.25e-02, grad_scale: 8.0 2023-03-09 00:07:02,111 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31878.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:07:09,087 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31884.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:07:26,287 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0 2023-03-09 00:07:45,641 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31915.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:07:54,183 INFO [train.py:898] (3/4) Epoch 9, batch 2850, loss[loss=0.2026, simple_loss=0.2917, pruned_loss=0.05673, over 18620.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2805, pruned_loss=0.05791, over 3585263.70 frames. ], batch size: 52, lr: 1.25e-02, grad_scale: 8.0 2023-03-09 00:08:13,183 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31939.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:08:16,252 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.274e+02 3.441e+02 4.233e+02 5.343e+02 2.679e+03, threshold=8.467e+02, percent-clipped=8.0 2023-03-09 00:08:26,686 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31950.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:08:40,869 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3622, 3.2496, 1.8259, 4.1458, 2.6679, 4.1509, 2.0915, 3.5611], device='cuda:3'), covar=tensor([0.0601, 0.0759, 0.1434, 0.0416, 0.0930, 0.0281, 0.1205, 0.0386], device='cuda:3'), in_proj_covar=tensor([0.0179, 0.0203, 0.0175, 0.0215, 0.0171, 0.0221, 0.0184, 0.0178], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 00:08:52,762 INFO [train.py:898] (3/4) Epoch 9, batch 2900, loss[loss=0.2041, simple_loss=0.2873, pruned_loss=0.0604, over 18631.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.281, pruned_loss=0.05813, over 3581414.93 frames. 
], batch size: 52, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:08:56,521 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31976.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:09:09,564 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6883, 3.5399, 4.7566, 2.8361, 4.0199, 2.4678, 2.8390, 1.9247], device='cuda:3'), covar=tensor([0.0934, 0.0806, 0.0089, 0.0631, 0.0585, 0.2327, 0.2391, 0.1739], device='cuda:3'), in_proj_covar=tensor([0.0190, 0.0211, 0.0107, 0.0161, 0.0222, 0.0243, 0.0270, 0.0205], device='cuda:3'), out_proj_covar=tensor([1.6863e-04, 1.9231e-04, 9.8623e-05, 1.4580e-04, 2.0216e-04, 2.2339e-04, 2.4574e-04, 1.9012e-04], device='cuda:3') 2023-03-09 00:09:43,607 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=32011.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:09:56,715 INFO [train.py:898] (3/4) Epoch 9, batch 2950, loss[loss=0.1833, simple_loss=0.271, pruned_loss=0.04778, over 18389.00 frames. ], tot_loss[loss=0.1984, simple_loss=0.2809, pruned_loss=0.05792, over 3577766.12 frames. ], batch size: 48, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:10:19,142 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.223e+02 3.183e+02 3.919e+02 4.606e+02 8.522e+02, threshold=7.838e+02, percent-clipped=1.0 2023-03-09 00:10:41,319 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32060.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:10:47,660 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32065.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:10:56,315 INFO [train.py:898] (3/4) Epoch 9, batch 3000, loss[loss=0.223, simple_loss=0.3085, pruned_loss=0.06872, over 18252.00 frames. ], tot_loss[loss=0.198, simple_loss=0.2807, pruned_loss=0.05765, over 3579209.56 frames. ], batch size: 57, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:10:56,316 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 00:11:08,348 INFO [train.py:932] (3/4) Epoch 9, validation: loss=0.1618, simple_loss=0.2644, pruned_loss=0.02958, over 944034.00 frames. 2023-03-09 00:11:08,348 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 00:11:08,527 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2340, 4.9880, 5.3333, 5.1842, 5.1254, 5.8370, 5.3929, 5.3167], device='cuda:3'), covar=tensor([0.0860, 0.0633, 0.0657, 0.0634, 0.1318, 0.0764, 0.0657, 0.1303], device='cuda:3'), in_proj_covar=tensor([0.0294, 0.0221, 0.0230, 0.0234, 0.0276, 0.0329, 0.0219, 0.0321], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-09 00:11:42,259 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9391, 4.9291, 4.5169, 4.8637, 4.8712, 4.3637, 4.8088, 4.5840], device='cuda:3'), covar=tensor([0.0416, 0.0450, 0.1432, 0.0724, 0.0495, 0.0395, 0.0396, 0.0839], device='cuda:3'), in_proj_covar=tensor([0.0379, 0.0435, 0.0576, 0.0345, 0.0322, 0.0395, 0.0419, 0.0540], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 00:11:55,410 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=32113.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:12:07,320 INFO [train.py:898] (3/4) Epoch 9, batch 3050, loss[loss=0.2112, simple_loss=0.2928, pruned_loss=0.06483, over 18479.00 frames. 
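At a fixed interval within the epoch (batch 3000 above, as at batch 0 of each epoch), train.py pauses to compute the validation loss over the held-out dev sets, always the same 944034.00 frames, and then reports the peak CUDA memory seen so far (19934MB here). A minimal sketch of such an evaluation pass, assuming a standard no-grad loop; the model and dataloader interfaces below are placeholders, not the recipe's actual signatures.

```python
import torch

@torch.no_grad()
def compute_validation_loss(model, valid_dl, device) -> float:
    """Average dev-set loss, mirroring the train.py:923/932/933 lines."""
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    for batch in valid_dl:                  # placeholder dataloader
        loss, num_frames = model(batch)     # placeholder interface
        tot_loss += loss.item() * num_frames
        tot_frames += num_frames
    model.train()
    print(f"Maximum memory allocated so far is "
          f"{torch.cuda.max_memory_allocated(device) // (1024 * 1024)}MB")
    return tot_loss / tot_frames
```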
], tot_loss[loss=0.1971, simple_loss=0.2797, pruned_loss=0.05728, over 3586233.64 frames. ], batch size: 59, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:12:29,403 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.233e+02 3.379e+02 3.882e+02 4.685e+02 8.666e+02, threshold=7.765e+02, percent-clipped=1.0 2023-03-09 00:12:32,448 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32144.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:12:35,887 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32147.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 00:12:59,822 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.77 vs. limit=2.0 2023-03-09 00:13:05,875 INFO [train.py:898] (3/4) Epoch 9, batch 3100, loss[loss=0.201, simple_loss=0.2816, pruned_loss=0.06018, over 18537.00 frames. ], tot_loss[loss=0.1971, simple_loss=0.2795, pruned_loss=0.05737, over 3569906.60 frames. ], batch size: 49, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:13:13,303 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32179.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:13:31,151 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8352, 3.7559, 3.5438, 3.0935, 3.5496, 2.9298, 2.9700, 3.6547], device='cuda:3'), covar=tensor([0.0031, 0.0065, 0.0069, 0.0109, 0.0065, 0.0134, 0.0144, 0.0053], device='cuda:3'), in_proj_covar=tensor([0.0080, 0.0103, 0.0095, 0.0141, 0.0093, 0.0137, 0.0145, 0.0077], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0001, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 00:13:32,014 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=32195.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 00:14:05,293 INFO [train.py:898] (3/4) Epoch 9, batch 3150, loss[loss=0.1883, simple_loss=0.2713, pruned_loss=0.05268, over 18382.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2795, pruned_loss=0.05716, over 3563478.75 frames. ], batch size: 50, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:14:18,463 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32234.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:14:28,066 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.206e+02 3.166e+02 3.799e+02 4.714e+02 1.226e+03, threshold=7.598e+02, percent-clipped=2.0 2023-03-09 00:15:02,108 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32271.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:15:04,020 INFO [train.py:898] (3/4) Epoch 9, batch 3200, loss[loss=0.1938, simple_loss=0.2841, pruned_loss=0.05174, over 18472.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2797, pruned_loss=0.05716, over 3564666.79 frames. 
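The zipformer.py:625 lines trace stochastic layer skipping: each encoder stack has its own warmup window in batch counts (warmup_begin/warmup_end), whole layers are bypassed aggressively inside the window, and a small residual drop rate evidently persists afterwards, which is why num_to_drop is usually 0 but occasionally 1 (layers_to_drop={0} at batch_count=32195 above, well past every window). A sketch of such a schedule; the linear ramp and both probabilities below are illustrative assumptions, only the logged field names come from the run.

```python
import random

def pick_layers_to_drop(num_layers: int, batch_count: float,
                        warmup_begin: float, warmup_end: float,
                        p_initial: float = 0.5, p_final: float = 0.05) -> set:
    """Choose encoder layers to bypass for one batch.

    Mirrors the fields of the zipformer.py:625 lines (warmup_begin,
    warmup_end, batch_count, num_to_drop, layers_to_drop). The schedule
    and probabilities are illustrative, not the recipe's exact values.
    """
    if batch_count < warmup_begin:
        p = p_initial
    elif batch_count > warmup_end:
        p = p_final          # small residual drop rate after warmup
    else:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = p_initial + frac * (p_final - p_initial)
    return {i for i in range(num_layers) if random.random() < p}
```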
], batch size: 53, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:15:38,685 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1540, 5.1674, 4.7120, 5.1730, 5.1472, 4.5292, 5.0859, 4.8276], device='cuda:3'), covar=tensor([0.0446, 0.0406, 0.1563, 0.0630, 0.0476, 0.0401, 0.0355, 0.0870], device='cuda:3'), in_proj_covar=tensor([0.0381, 0.0433, 0.0576, 0.0342, 0.0317, 0.0395, 0.0415, 0.0542], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 00:15:43,084 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32306.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:15:49,796 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-09 00:16:02,574 INFO [train.py:898] (3/4) Epoch 9, batch 3250, loss[loss=0.2009, simple_loss=0.285, pruned_loss=0.05838, over 17811.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2792, pruned_loss=0.05692, over 3570385.00 frames. ], batch size: 70, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:16:24,633 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.580e+02 3.470e+02 4.080e+02 5.186e+02 8.555e+02, threshold=8.161e+02, percent-clipped=3.0 2023-03-09 00:16:46,475 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32360.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:17:01,461 INFO [train.py:898] (3/4) Epoch 9, batch 3300, loss[loss=0.1888, simple_loss=0.2635, pruned_loss=0.05706, over 18411.00 frames. ], tot_loss[loss=0.1961, simple_loss=0.2786, pruned_loss=0.05678, over 3580593.77 frames. ], batch size: 42, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:17:43,084 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=32408.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:18:00,701 INFO [train.py:898] (3/4) Epoch 9, batch 3350, loss[loss=0.1829, simple_loss=0.2673, pruned_loss=0.04929, over 18282.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.2785, pruned_loss=0.05654, over 3585112.36 frames. ], batch size: 49, lr: 1.24e-02, grad_scale: 8.0 2023-03-09 00:18:22,501 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.160e+02 3.179e+02 3.976e+02 5.107e+02 1.332e+03, threshold=7.951e+02, percent-clipped=3.0 2023-03-09 00:18:25,093 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32444.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:18:59,648 INFO [train.py:898] (3/4) Epoch 9, batch 3400, loss[loss=0.1686, simple_loss=0.2422, pruned_loss=0.04749, over 18406.00 frames. ], tot_loss[loss=0.1959, simple_loss=0.2784, pruned_loss=0.05675, over 3583719.86 frames. ], batch size: 42, lr: 1.23e-02, grad_scale: 8.0 2023-03-09 00:19:06,755 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32479.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:19:21,714 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=32492.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:19:57,942 INFO [train.py:898] (3/4) Epoch 9, batch 3450, loss[loss=0.1961, simple_loss=0.2738, pruned_loss=0.0592, over 18482.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2789, pruned_loss=0.05688, over 3583335.37 frames. 
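The zipformer.py:1455 dumps are per-head health checks on the attention modules: each of the eight entries in the first tensor is, presumably, the mean entropy of one head's attention distribution (with covariance statistics of the input and output projections alongside). Entropy near 0 flags a collapsed head that attends to a single frame; entropy near log(src_len) flags near-uniform attention. A sketch of the entropy computation, assuming a (heads, batch, tgt, src) layout for the weights; the layout is a guess.

```python
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    """Mean entropy per attention head.

    attn: (num_heads, batch, tgt_len, src_len), each src row summing to 1.
    Returns one scalar per head, as in the logged 8-element tensors.
    """
    eps = 1.0e-20
    ent = -(attn * (attn + eps).log()).sum(dim=-1)   # (heads, batch, tgt)
    return ent.mean(dim=(1, 2))

# hypothetical usage: 8 heads over a 100-frame utterance
attn = torch.softmax(torch.randn(8, 1, 100, 100), dim=-1)
print(attn_weights_entropy(attn))   # values in (0, log(100) ~= 4.6)
```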
], batch size: 44, lr: 1.23e-02, grad_scale: 8.0 2023-03-09 00:20:02,596 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=32527.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:20:10,646 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32534.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:20:19,239 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.059e+02 3.448e+02 3.896e+02 4.764e+02 9.293e+02, threshold=7.793e+02, percent-clipped=1.0 2023-03-09 00:20:22,470 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3279, 2.5970, 2.5073, 2.6336, 3.4783, 3.2041, 2.9297, 2.7904], device='cuda:3'), covar=tensor([0.0315, 0.0377, 0.0631, 0.0394, 0.0231, 0.0229, 0.0392, 0.0322], device='cuda:3'), in_proj_covar=tensor([0.0106, 0.0099, 0.0144, 0.0128, 0.0092, 0.0080, 0.0123, 0.0117], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:20:53,966 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32571.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:20:56,534 INFO [train.py:898] (3/4) Epoch 9, batch 3500, loss[loss=0.184, simple_loss=0.2685, pruned_loss=0.04974, over 18390.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.2798, pruned_loss=0.05767, over 3552259.90 frames. ], batch size: 52, lr: 1.23e-02, grad_scale: 8.0 2023-03-09 00:21:06,899 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=32582.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:21:34,271 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32606.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:21:47,770 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=32619.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:21:52,017 INFO [train.py:898] (3/4) Epoch 9, batch 3550, loss[loss=0.2035, simple_loss=0.2824, pruned_loss=0.0623, over 18376.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2803, pruned_loss=0.0578, over 3569780.45 frames. ], batch size: 56, lr: 1.23e-02, grad_scale: 8.0 2023-03-09 00:22:12,489 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.120e+02 3.503e+02 4.111e+02 5.019e+02 1.415e+03, threshold=8.222e+02, percent-clipped=4.0 2023-03-09 00:22:25,468 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=32654.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:22:45,802 INFO [train.py:898] (3/4) Epoch 9, batch 3600, loss[loss=0.2131, simple_loss=0.2949, pruned_loss=0.06569, over 17614.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2802, pruned_loss=0.05762, over 3568780.68 frames. ], batch size: 70, lr: 1.23e-02, grad_scale: 8.0 2023-03-09 00:22:47,103 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7662, 5.2990, 5.3351, 5.3517, 4.8491, 5.2108, 4.3554, 5.1510], device='cuda:3'), covar=tensor([0.0215, 0.0361, 0.0212, 0.0299, 0.0356, 0.0229, 0.1385, 0.0325], device='cuda:3'), in_proj_covar=tensor([0.0161, 0.0207, 0.0194, 0.0223, 0.0206, 0.0213, 0.0273, 0.0198], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 00:23:53,331 INFO [train.py:898] (3/4) Epoch 10, batch 0, loss[loss=0.2037, simple_loss=0.2896, pruned_loss=0.05886, over 18263.00 frames. ], tot_loss[loss=0.2037, simple_loss=0.2896, pruned_loss=0.05886, over 18263.00 frames. 
], batch size: 57, lr: 1.17e-02, grad_scale: 8.0 2023-03-09 00:23:53,332 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 00:24:05,217 INFO [train.py:932] (3/4) Epoch 10, validation: loss=0.1621, simple_loss=0.2651, pruned_loss=0.02958, over 944034.00 frames. 2023-03-09 00:24:05,217 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 00:24:37,549 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4304, 3.6935, 5.2701, 4.5009, 3.2237, 3.1424, 4.5757, 5.3070], device='cuda:3'), covar=tensor([0.0890, 0.1542, 0.0089, 0.0284, 0.0842, 0.1000, 0.0316, 0.0097], device='cuda:3'), in_proj_covar=tensor([0.0132, 0.0228, 0.0090, 0.0152, 0.0169, 0.0169, 0.0160, 0.0128], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:24:43,990 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0 2023-03-09 00:24:46,643 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.116e+02 3.572e+02 4.322e+02 5.704e+02 1.376e+03, threshold=8.645e+02, percent-clipped=6.0 2023-03-09 00:25:03,987 INFO [train.py:898] (3/4) Epoch 10, batch 50, loss[loss=0.2094, simple_loss=0.2966, pruned_loss=0.06106, over 18485.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.2786, pruned_loss=0.05651, over 815989.94 frames. ], batch size: 53, lr: 1.17e-02, grad_scale: 8.0 2023-03-09 00:25:44,252 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7731, 5.3971, 5.3628, 5.2903, 4.9326, 5.2504, 4.6035, 5.1849], device='cuda:3'), covar=tensor([0.0224, 0.0278, 0.0177, 0.0299, 0.0377, 0.0239, 0.1309, 0.0266], device='cuda:3'), in_proj_covar=tensor([0.0161, 0.0205, 0.0192, 0.0221, 0.0204, 0.0211, 0.0268, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 00:25:48,915 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3806, 5.1574, 5.4911, 5.4084, 5.3413, 6.0069, 5.6416, 5.4851], device='cuda:3'), covar=tensor([0.0910, 0.0613, 0.0723, 0.0717, 0.1284, 0.0706, 0.0637, 0.1622], device='cuda:3'), in_proj_covar=tensor([0.0294, 0.0226, 0.0237, 0.0240, 0.0276, 0.0331, 0.0218, 0.0330], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-09 00:26:02,543 INFO [train.py:898] (3/4) Epoch 10, batch 100, loss[loss=0.1816, simple_loss=0.2532, pruned_loss=0.05496, over 18398.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2816, pruned_loss=0.05835, over 1427505.27 frames. ], batch size: 42, lr: 1.17e-02, grad_scale: 8.0 2023-03-09 00:26:44,170 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.347e+02 3.358e+02 3.908e+02 4.733e+02 8.989e+02, threshold=7.816e+02, percent-clipped=2.0 2023-03-09 00:26:58,141 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3035, 2.5153, 2.3925, 2.6618, 3.4006, 3.3972, 2.8792, 2.7252], device='cuda:3'), covar=tensor([0.0153, 0.0283, 0.0595, 0.0359, 0.0171, 0.0135, 0.0368, 0.0324], device='cuda:3'), in_proj_covar=tensor([0.0110, 0.0099, 0.0145, 0.0131, 0.0094, 0.0080, 0.0127, 0.0119], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:27:01,143 INFO [train.py:898] (3/4) Epoch 10, batch 150, loss[loss=0.2134, simple_loss=0.3, pruned_loss=0.06338, over 17118.00 frames. 
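The lr: field decays slowly within an epoch (1.25e-02 early in Epoch 9 down to 1.23e-02 near its end) and steps down at each epoch boundary (1.17e-02 from Epoch 10, batch 0 above). This is consistent with an Eden-style schedule that discounts in both batch count and completed epochs. A sketch of that formula, with base_lr=0.05, lr_batches=5000 and lr_epochs=3.5 taken as the values this log is consistent with, and "epoch" counted as completed epochs; treat all of this as inference from the printed numbers.

```python
def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    """Eden-style LR: smooth decay in batches, stepwise decay in epochs."""
    batch_f = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_f = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_f * epoch_f

# epoch as *completed* epochs: batch ~31700 falls in Epoch 9, so epoch=8
print(f"{eden_lr(0.05, 31700, 8):.2e}")   # ~1.25e-02, as logged
print(f"{eden_lr(0.05, 32700, 9):.2e}")   # ~1.17e-02 at Epoch 10, batch 0
```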
], tot_loss[loss=0.1993, simple_loss=0.2821, pruned_loss=0.05825, over 1898525.19 frames. ], batch size: 78, lr: 1.17e-02, grad_scale: 8.0 2023-03-09 00:27:43,730 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-09 00:27:59,661 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3614, 2.6609, 2.5125, 2.7048, 3.4427, 3.3883, 2.9079, 2.7185], device='cuda:3'), covar=tensor([0.0175, 0.0264, 0.0503, 0.0326, 0.0184, 0.0163, 0.0336, 0.0289], device='cuda:3'), in_proj_covar=tensor([0.0109, 0.0097, 0.0143, 0.0129, 0.0093, 0.0080, 0.0124, 0.0117], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:28:00,322 INFO [train.py:898] (3/4) Epoch 10, batch 200, loss[loss=0.2201, simple_loss=0.2963, pruned_loss=0.07195, over 18284.00 frames. ], tot_loss[loss=0.1983, simple_loss=0.2813, pruned_loss=0.05764, over 2265300.49 frames. ], batch size: 57, lr: 1.17e-02, grad_scale: 8.0 2023-03-09 00:28:18,477 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6546, 2.0254, 2.6485, 2.7474, 3.4159, 5.1779, 4.5947, 4.0544], device='cuda:3'), covar=tensor([0.1105, 0.1929, 0.2207, 0.1277, 0.1643, 0.0070, 0.0341, 0.0425], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0281, 0.0291, 0.0240, 0.0349, 0.0170, 0.0243, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 00:28:42,826 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.147e+02 3.153e+02 3.905e+02 4.894e+02 1.439e+03, threshold=7.811e+02, percent-clipped=2.0 2023-03-09 00:28:59,424 INFO [train.py:898] (3/4) Epoch 10, batch 250, loss[loss=0.2036, simple_loss=0.2837, pruned_loss=0.06173, over 17008.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2804, pruned_loss=0.05669, over 2560015.04 frames. ], batch size: 78, lr: 1.17e-02, grad_scale: 4.0 2023-03-09 00:29:09,956 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=32966.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:29:58,489 INFO [train.py:898] (3/4) Epoch 10, batch 300, loss[loss=0.1963, simple_loss=0.2787, pruned_loss=0.05699, over 18495.00 frames. ], tot_loss[loss=0.1956, simple_loss=0.279, pruned_loss=0.05616, over 2781483.49 frames. ], batch size: 51, lr: 1.16e-02, grad_scale: 4.0 2023-03-09 00:30:22,070 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33027.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:30:40,677 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.108e+02 3.425e+02 4.260e+02 4.893e+02 1.115e+03, threshold=8.520e+02, percent-clipped=1.0 2023-03-09 00:30:57,414 INFO [train.py:898] (3/4) Epoch 10, batch 350, loss[loss=0.1845, simple_loss=0.2562, pruned_loss=0.0564, over 17592.00 frames. ], tot_loss[loss=0.1948, simple_loss=0.2778, pruned_loss=0.0559, over 2955112.07 frames. ], batch size: 39, lr: 1.16e-02, grad_scale: 4.0 2023-03-09 00:31:11,115 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33069.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:31:26,573 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0 2023-03-09 00:31:27,722 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.90 vs. 
limit=5.0 2023-03-09 00:31:55,095 INFO [train.py:898] (3/4) Epoch 10, batch 400, loss[loss=0.1954, simple_loss=0.2822, pruned_loss=0.05431, over 18482.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2781, pruned_loss=0.05562, over 3105774.88 frames. ], batch size: 51, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:32:22,448 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33130.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:32:37,085 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.379e+02 3.296e+02 4.038e+02 4.885e+02 1.161e+03, threshold=8.076e+02, percent-clipped=2.0 2023-03-09 00:32:37,673 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2058, 4.1407, 5.3036, 2.9008, 4.3844, 2.7075, 3.1511, 2.1091], device='cuda:3'), covar=tensor([0.0759, 0.0605, 0.0066, 0.0672, 0.0542, 0.2083, 0.2318, 0.1623], device='cuda:3'), in_proj_covar=tensor([0.0192, 0.0209, 0.0110, 0.0163, 0.0225, 0.0242, 0.0268, 0.0204], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 00:32:51,658 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33155.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:32:53,558 INFO [train.py:898] (3/4) Epoch 10, batch 450, loss[loss=0.1762, simple_loss=0.2503, pruned_loss=0.05109, over 18498.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2778, pruned_loss=0.05563, over 3207706.70 frames. ], batch size: 44, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:33:52,228 INFO [train.py:898] (3/4) Epoch 10, batch 500, loss[loss=0.2096, simple_loss=0.2908, pruned_loss=0.06414, over 18610.00 frames. ], tot_loss[loss=0.1952, simple_loss=0.2787, pruned_loss=0.05583, over 3292004.61 frames. ], batch size: 52, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:34:03,246 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33216.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:34:33,502 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.970e+02 3.300e+02 3.836e+02 4.921e+02 1.033e+03, threshold=7.671e+02, percent-clipped=3.0 2023-03-09 00:34:49,827 INFO [train.py:898] (3/4) Epoch 10, batch 550, loss[loss=0.1842, simple_loss=0.2614, pruned_loss=0.05346, over 17680.00 frames. ], tot_loss[loss=0.1952, simple_loss=0.2786, pruned_loss=0.05586, over 3363715.79 frames. 
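The scaling.py:679 lines track a "whitening" diagnostic on module activations: a metric that equals 1.0 when the per-group channel covariance is a multiple of the identity and grows as its eigenvalues spread out, compared against a module-specific limit (2.0 for the num_groups=8 modules, 5.0 for the num_groups=1, num_channels=384 module just above; the log only records the comparison). A sketch of one metric with exactly that behavior; the recipe's precise formula is not shown in the log and is assumed.

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """1.0 for perfectly 'white' activations; larger as the per-group
    covariance eigenvalues spread out. x: (..., num_channels)."""
    flat = x.reshape(-1, x.shape[-1])
    g = flat.shape[-1] // num_groups
    xg = flat.reshape(-1, num_groups, g).transpose(0, 1)     # (groups, N, g)
    cov = torch.matmul(xg.transpose(1, 2), xg) / xg.shape[1]  # (groups, g, g)
    tr = cov.diagonal(dim1=1, dim2=2).sum(-1)                 # trace(C)
    tr2 = (cov * cov.transpose(1, 2)).sum(dim=(1, 2))         # trace(C @ C)
    # trace(C^2) * g / trace(C)^2 >= 1, with equality iff C = c * I
    return (tr2 * g / tr.pow(2)).mean()
```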
], batch size: 39, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:34:55,306 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1321, 5.1340, 4.6862, 5.1012, 5.0689, 4.4201, 4.9811, 4.7106], device='cuda:3'), covar=tensor([0.0433, 0.0453, 0.1516, 0.0717, 0.0510, 0.0429, 0.0400, 0.0966], device='cuda:3'), in_proj_covar=tensor([0.0386, 0.0446, 0.0586, 0.0348, 0.0328, 0.0404, 0.0422, 0.0555], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 00:35:02,226 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3916, 2.6387, 2.2048, 2.6028, 3.4051, 3.4798, 2.8037, 2.6167], device='cuda:3'), covar=tensor([0.0218, 0.0297, 0.0658, 0.0300, 0.0207, 0.0145, 0.0422, 0.0352], device='cuda:3'), in_proj_covar=tensor([0.0107, 0.0098, 0.0145, 0.0130, 0.0094, 0.0080, 0.0124, 0.0117], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:35:48,948 INFO [train.py:898] (3/4) Epoch 10, batch 600, loss[loss=0.182, simple_loss=0.263, pruned_loss=0.05048, over 18284.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2779, pruned_loss=0.05537, over 3422506.59 frames. ], batch size: 49, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:35:49,492 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5838, 2.1639, 2.8067, 2.8198, 3.4107, 5.2533, 4.7423, 4.1745], device='cuda:3'), covar=tensor([0.1091, 0.1776, 0.2063, 0.1160, 0.1576, 0.0092, 0.0304, 0.0393], device='cuda:3'), in_proj_covar=tensor([0.0227, 0.0284, 0.0293, 0.0241, 0.0351, 0.0171, 0.0247, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 00:35:50,402 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3189, 4.7227, 4.4645, 4.5977, 4.3247, 4.4711, 4.7690, 4.6938], device='cuda:3'), covar=tensor([0.1230, 0.0733, 0.1339, 0.0741, 0.1504, 0.0613, 0.0754, 0.0732], device='cuda:3'), in_proj_covar=tensor([0.0504, 0.0415, 0.0306, 0.0457, 0.0611, 0.0458, 0.0587, 0.0435], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 00:35:52,881 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4340, 3.7900, 5.2579, 4.4966, 3.0562, 3.1184, 4.4989, 5.3615], device='cuda:3'), covar=tensor([0.0868, 0.1533, 0.0088, 0.0277, 0.0951, 0.1038, 0.0343, 0.0123], device='cuda:3'), in_proj_covar=tensor([0.0139, 0.0240, 0.0095, 0.0159, 0.0177, 0.0177, 0.0169, 0.0135], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:36:06,529 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33322.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:36:13,974 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1774, 5.4891, 2.9468, 5.3257, 5.2380, 5.5855, 5.4244, 3.0259], device='cuda:3'), covar=tensor([0.0157, 0.0070, 0.0695, 0.0065, 0.0068, 0.0060, 0.0078, 0.0830], device='cuda:3'), in_proj_covar=tensor([0.0076, 0.0063, 0.0087, 0.0079, 0.0073, 0.0062, 0.0075, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0005, 0.0004, 0.0004, 0.0003, 0.0004, 0.0005], device='cuda:3') 2023-03-09 00:36:30,340 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.217e+02 3.224e+02 3.633e+02 4.391e+02 9.720e+02, threshold=7.266e+02, percent-clipped=2.0 2023-03-09 00:36:46,439 
INFO [train.py:898] (3/4) Epoch 10, batch 650, loss[loss=0.1774, simple_loss=0.2624, pruned_loss=0.04625, over 18270.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2779, pruned_loss=0.05532, over 3447257.69 frames. ], batch size: 45, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:37:45,672 INFO [train.py:898] (3/4) Epoch 10, batch 700, loss[loss=0.1738, simple_loss=0.2524, pruned_loss=0.04764, over 18442.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2778, pruned_loss=0.0553, over 3483298.99 frames. ], batch size: 43, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:37:47,066 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4598, 5.0388, 5.0588, 5.0485, 4.6099, 4.9502, 4.3892, 4.9107], device='cuda:3'), covar=tensor([0.0234, 0.0285, 0.0190, 0.0296, 0.0352, 0.0229, 0.1019, 0.0266], device='cuda:3'), in_proj_covar=tensor([0.0167, 0.0211, 0.0196, 0.0228, 0.0209, 0.0216, 0.0275, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 00:37:47,222 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4456, 3.5487, 4.9560, 4.1608, 2.8995, 2.8813, 4.0187, 5.0307], device='cuda:3'), covar=tensor([0.0835, 0.1421, 0.0091, 0.0324, 0.0979, 0.1189, 0.0448, 0.0169], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0236, 0.0094, 0.0157, 0.0175, 0.0175, 0.0167, 0.0134], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:38:07,271 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33425.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:38:27,844 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.134e+02 3.209e+02 3.744e+02 4.761e+02 1.041e+03, threshold=7.488e+02, percent-clipped=6.0 2023-03-09 00:38:44,037 INFO [train.py:898] (3/4) Epoch 10, batch 750, loss[loss=0.1948, simple_loss=0.2815, pruned_loss=0.05401, over 18583.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2783, pruned_loss=0.05534, over 3514177.90 frames. ], batch size: 54, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:38:58,037 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-09 00:39:01,862 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33472.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:39:42,267 INFO [train.py:898] (3/4) Epoch 10, batch 800, loss[loss=0.1692, simple_loss=0.2521, pruned_loss=0.04317, over 18573.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2774, pruned_loss=0.05495, over 3545167.60 frames. 
], batch size: 45, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:39:47,421 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33511.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:40:13,145 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6567, 4.1220, 4.4197, 3.3334, 3.6343, 3.4887, 2.6285, 2.0917], device='cuda:3'), covar=tensor([0.0189, 0.0168, 0.0051, 0.0237, 0.0254, 0.0207, 0.0613, 0.0878], device='cuda:3'), in_proj_covar=tensor([0.0056, 0.0046, 0.0044, 0.0056, 0.0076, 0.0054, 0.0069, 0.0075], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 00:40:13,186 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33533.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:40:24,515 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.088e+02 3.240e+02 4.102e+02 4.783e+02 1.200e+03, threshold=8.205e+02, percent-clipped=1.0 2023-03-09 00:40:40,563 INFO [train.py:898] (3/4) Epoch 10, batch 850, loss[loss=0.1739, simple_loss=0.2556, pruned_loss=0.04611, over 18488.00 frames. ], tot_loss[loss=0.1948, simple_loss=0.2785, pruned_loss=0.05555, over 3555656.51 frames. ], batch size: 47, lr: 1.16e-02, grad_scale: 8.0 2023-03-09 00:41:39,910 INFO [train.py:898] (3/4) Epoch 10, batch 900, loss[loss=0.1846, simple_loss=0.2758, pruned_loss=0.04665, over 18297.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.278, pruned_loss=0.0554, over 3559160.94 frames. ], batch size: 54, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:41:45,551 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7698, 3.7978, 5.2280, 4.4519, 3.3401, 3.1951, 4.5522, 5.3843], device='cuda:3'), covar=tensor([0.0801, 0.1661, 0.0081, 0.0309, 0.0866, 0.1030, 0.0349, 0.0199], device='cuda:3'), in_proj_covar=tensor([0.0134, 0.0231, 0.0093, 0.0155, 0.0172, 0.0172, 0.0165, 0.0134], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:41:57,604 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=33622.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:42:22,396 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.227e+02 3.532e+02 4.038e+02 4.740e+02 8.485e+02, threshold=8.075e+02, percent-clipped=1.0 2023-03-09 00:42:38,550 INFO [train.py:898] (3/4) Epoch 10, batch 950, loss[loss=0.1545, simple_loss=0.2349, pruned_loss=0.03709, over 18496.00 frames. ], tot_loss[loss=0.1937, simple_loss=0.2771, pruned_loss=0.05512, over 3567318.95 frames. 
], batch size: 44, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:42:53,739 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=33670.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:43:01,695 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33677.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:43:13,773 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5515, 1.8525, 2.4909, 2.6999, 2.9339, 4.9805, 4.3636, 3.6429], device='cuda:3'), covar=tensor([0.1331, 0.2524, 0.2850, 0.1449, 0.2603, 0.0118, 0.0421, 0.0538], device='cuda:3'), in_proj_covar=tensor([0.0226, 0.0282, 0.0294, 0.0240, 0.0350, 0.0171, 0.0247, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 00:43:19,753 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-09 00:43:36,486 INFO [train.py:898] (3/4) Epoch 10, batch 1000, loss[loss=0.2127, simple_loss=0.2965, pruned_loss=0.0644, over 18578.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2777, pruned_loss=0.05568, over 3566587.32 frames. ], batch size: 54, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:43:57,425 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=33725.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:44:02,644 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6203, 3.1584, 4.0620, 2.7125, 3.6525, 2.5109, 2.5947, 2.3003], device='cuda:3'), covar=tensor([0.0741, 0.0688, 0.0134, 0.0488, 0.0519, 0.1900, 0.2084, 0.1304], device='cuda:3'), in_proj_covar=tensor([0.0192, 0.0208, 0.0110, 0.0161, 0.0222, 0.0241, 0.0270, 0.0205], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 00:44:13,358 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33738.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 00:44:18,695 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.319e+02 3.380e+02 3.936e+02 4.816e+02 9.502e+02, threshold=7.872e+02, percent-clipped=1.0 2023-03-09 00:44:26,524 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5180, 5.4577, 5.0256, 5.4212, 5.4234, 4.7340, 5.3631, 5.0328], device='cuda:3'), covar=tensor([0.0374, 0.0435, 0.1302, 0.0772, 0.0490, 0.0455, 0.0355, 0.0859], device='cuda:3'), in_proj_covar=tensor([0.0386, 0.0451, 0.0594, 0.0355, 0.0331, 0.0409, 0.0429, 0.0561], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 00:44:35,202 INFO [train.py:898] (3/4) Epoch 10, batch 1050, loss[loss=0.1848, simple_loss=0.2665, pruned_loss=0.05155, over 18261.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2779, pruned_loss=0.05568, over 3572139.76 frames. 
], batch size: 47, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:44:44,453 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9829, 4.0802, 2.3781, 4.1837, 4.9731, 2.4685, 3.4733, 3.8075], device='cuda:3'), covar=tensor([0.0094, 0.1041, 0.1624, 0.0557, 0.0053, 0.1421, 0.0852, 0.0719], device='cuda:3'), in_proj_covar=tensor([0.0108, 0.0220, 0.0185, 0.0187, 0.0086, 0.0173, 0.0198, 0.0203], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 00:44:53,724 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=33773.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:45:20,142 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6346, 5.2437, 5.1806, 5.1837, 4.7385, 5.0370, 4.4332, 5.0401], device='cuda:3'), covar=tensor([0.0226, 0.0254, 0.0195, 0.0305, 0.0364, 0.0227, 0.1184, 0.0256], device='cuda:3'), in_proj_covar=tensor([0.0166, 0.0210, 0.0194, 0.0227, 0.0207, 0.0215, 0.0276, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 00:45:22,833 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-09 00:45:34,464 INFO [train.py:898] (3/4) Epoch 10, batch 1100, loss[loss=0.2013, simple_loss=0.2894, pruned_loss=0.0566, over 18268.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2779, pruned_loss=0.05524, over 3590840.23 frames. ], batch size: 57, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:45:39,320 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=33811.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:45:58,709 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33828.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:46:16,690 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.260e+02 3.207e+02 4.126e+02 4.735e+02 1.316e+03, threshold=8.252e+02, percent-clipped=4.0 2023-03-09 00:46:32,947 INFO [train.py:898] (3/4) Epoch 10, batch 1150, loss[loss=0.2026, simple_loss=0.2845, pruned_loss=0.06035, over 16229.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2782, pruned_loss=0.05555, over 3592561.31 frames. ], batch size: 94, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:46:36,071 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=33859.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:47:31,464 INFO [train.py:898] (3/4) Epoch 10, batch 1200, loss[loss=0.2145, simple_loss=0.2993, pruned_loss=0.06483, over 17712.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2784, pruned_loss=0.05537, over 3602748.99 frames. 
], batch size: 70, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:47:37,435 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4593, 5.4290, 4.9670, 5.4445, 5.3277, 4.7354, 5.2903, 5.0297], device='cuda:3'), covar=tensor([0.0357, 0.0372, 0.1468, 0.0612, 0.0494, 0.0440, 0.0398, 0.0855], device='cuda:3'), in_proj_covar=tensor([0.0382, 0.0449, 0.0590, 0.0351, 0.0333, 0.0405, 0.0432, 0.0560], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 00:47:42,800 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9862, 3.8458, 5.2476, 4.4996, 3.1320, 2.8544, 4.3557, 5.2970], device='cuda:3'), covar=tensor([0.0682, 0.1392, 0.0070, 0.0325, 0.0894, 0.1080, 0.0350, 0.0206], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0232, 0.0095, 0.0154, 0.0173, 0.0172, 0.0165, 0.0134], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0002, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 00:48:09,187 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33939.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:48:11,436 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5532, 5.2414, 5.2066, 5.1872, 4.6889, 5.0552, 4.4618, 4.9974], device='cuda:3'), covar=tensor([0.0219, 0.0238, 0.0167, 0.0311, 0.0339, 0.0202, 0.1159, 0.0273], device='cuda:3'), in_proj_covar=tensor([0.0165, 0.0207, 0.0192, 0.0225, 0.0205, 0.0212, 0.0273, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 00:48:13,371 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.163e+02 3.321e+02 3.896e+02 4.849e+02 9.830e+02, threshold=7.793e+02, percent-clipped=3.0 2023-03-09 00:48:29,980 INFO [train.py:898] (3/4) Epoch 10, batch 1250, loss[loss=0.1749, simple_loss=0.251, pruned_loss=0.04946, over 18436.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2769, pruned_loss=0.05463, over 3608672.41 frames. ], batch size: 43, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:48:31,530 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5148, 5.4757, 5.0403, 5.4214, 5.3925, 4.8075, 5.3801, 5.0493], device='cuda:3'), covar=tensor([0.0378, 0.0427, 0.1367, 0.0839, 0.0539, 0.0404, 0.0343, 0.0894], device='cuda:3'), in_proj_covar=tensor([0.0382, 0.0451, 0.0592, 0.0355, 0.0334, 0.0406, 0.0430, 0.0561], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 00:48:45,657 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6440, 3.5151, 1.9758, 4.4153, 3.0683, 4.4977, 2.4927, 3.9951], device='cuda:3'), covar=tensor([0.0544, 0.0745, 0.1412, 0.0360, 0.0790, 0.0241, 0.1037, 0.0361], device='cuda:3'), in_proj_covar=tensor([0.0184, 0.0208, 0.0174, 0.0226, 0.0177, 0.0228, 0.0187, 0.0181], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 00:49:24,458 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34000.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:49:32,599 INFO [train.py:898] (3/4) Epoch 10, batch 1300, loss[loss=0.2016, simple_loss=0.2881, pruned_loss=0.05749, over 17073.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2772, pruned_loss=0.05499, over 3613382.18 frames. 
], batch size: 78, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:50:02,583 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34033.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 00:50:14,327 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.927e+02 3.198e+02 3.729e+02 4.411e+02 1.072e+03, threshold=7.459e+02, percent-clipped=4.0 2023-03-09 00:50:15,866 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34044.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:50:30,795 INFO [train.py:898] (3/4) Epoch 10, batch 1350, loss[loss=0.198, simple_loss=0.2838, pruned_loss=0.05608, over 17973.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2771, pruned_loss=0.05488, over 3610305.40 frames. ], batch size: 65, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:51:27,333 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34105.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:51:29,263 INFO [train.py:898] (3/4) Epoch 10, batch 1400, loss[loss=0.1825, simple_loss=0.2622, pruned_loss=0.05143, over 18553.00 frames. ], tot_loss[loss=0.1932, simple_loss=0.2767, pruned_loss=0.05478, over 3608980.61 frames. ], batch size: 49, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:51:43,740 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0 2023-03-09 00:51:55,077 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34128.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:52:11,993 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.227e+02 3.479e+02 4.188e+02 5.259e+02 1.052e+03, threshold=8.376e+02, percent-clipped=5.0 2023-03-09 00:52:15,183 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34145.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:52:28,409 INFO [train.py:898] (3/4) Epoch 10, batch 1450, loss[loss=0.2116, simple_loss=0.3008, pruned_loss=0.06126, over 18154.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2764, pruned_loss=0.05496, over 3595745.95 frames. ], batch size: 62, lr: 1.15e-02, grad_scale: 8.0 2023-03-09 00:52:50,586 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=34176.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:53:21,962 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.45 vs. limit=5.0 2023-03-09 00:53:26,188 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34206.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:53:26,947 INFO [train.py:898] (3/4) Epoch 10, batch 1500, loss[loss=0.1859, simple_loss=0.2824, pruned_loss=0.04468, over 18642.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2764, pruned_loss=0.05495, over 3579402.28 frames. ], batch size: 52, lr: 1.14e-02, grad_scale: 8.0 2023-03-09 00:53:48,910 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-09 00:53:52,407 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. 
limit=2.0 2023-03-09 00:54:08,715 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.894e+02 3.400e+02 4.165e+02 5.471e+02 1.005e+03, threshold=8.329e+02, percent-clipped=3.0 2023-03-09 00:54:24,215 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5618, 6.0482, 5.5836, 5.8797, 5.5666, 5.6348, 6.1431, 6.0640], device='cuda:3'), covar=tensor([0.1124, 0.0617, 0.0356, 0.0689, 0.1438, 0.0562, 0.0521, 0.0569], device='cuda:3'), in_proj_covar=tensor([0.0518, 0.0424, 0.0321, 0.0471, 0.0634, 0.0465, 0.0596, 0.0439], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 00:54:25,067 INFO [train.py:898] (3/4) Epoch 10, batch 1550, loss[loss=0.1683, simple_loss=0.2488, pruned_loss=0.04387, over 18251.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2777, pruned_loss=0.05527, over 3580444.11 frames. ], batch size: 45, lr: 1.14e-02, grad_scale: 8.0 2023-03-09 00:55:09,437 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34295.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:55:23,418 INFO [train.py:898] (3/4) Epoch 10, batch 1600, loss[loss=0.1659, simple_loss=0.2424, pruned_loss=0.04469, over 18455.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2766, pruned_loss=0.055, over 3568691.86 frames. ], batch size: 43, lr: 1.14e-02, grad_scale: 8.0 2023-03-09 00:55:53,843 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34333.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:55:56,565 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-09 00:55:57,649 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-09 00:56:05,596 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.752e+02 3.318e+02 3.747e+02 4.447e+02 1.015e+03, threshold=7.495e+02, percent-clipped=2.0 2023-03-09 00:56:21,843 INFO [train.py:898] (3/4) Epoch 10, batch 1650, loss[loss=0.1865, simple_loss=0.2631, pruned_loss=0.055, over 18491.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2777, pruned_loss=0.05546, over 3577560.60 frames. ], batch size: 44, lr: 1.14e-02, grad_scale: 8.0 2023-03-09 00:56:38,961 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6836, 2.3528, 2.3976, 2.5690, 3.0736, 3.9566, 3.6533, 3.2228], device='cuda:3'), covar=tensor([0.0929, 0.1531, 0.2029, 0.1248, 0.1409, 0.0195, 0.0411, 0.0396], device='cuda:3'), in_proj_covar=tensor([0.0227, 0.0284, 0.0293, 0.0240, 0.0352, 0.0175, 0.0249, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 00:56:49,675 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=34381.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:57:11,875 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34400.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:57:20,097 INFO [train.py:898] (3/4) Epoch 10, batch 1700, loss[loss=0.181, simple_loss=0.2643, pruned_loss=0.04888, over 18408.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2773, pruned_loss=0.0554, over 3565328.43 frames. 
], batch size: 48, lr: 1.14e-02, grad_scale: 4.0 2023-03-09 00:58:01,655 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8699, 3.7273, 3.5592, 3.0900, 3.4990, 2.8588, 2.7291, 3.8147], device='cuda:3'), covar=tensor([0.0043, 0.0095, 0.0089, 0.0133, 0.0081, 0.0167, 0.0189, 0.0050], device='cuda:3'), in_proj_covar=tensor([0.0086, 0.0112, 0.0101, 0.0148, 0.0101, 0.0146, 0.0152, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 00:58:03,526 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.339e+02 3.311e+02 3.796e+02 4.465e+02 1.114e+03, threshold=7.593e+02, percent-clipped=2.0 2023-03-09 00:58:10,358 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8511, 3.9606, 2.4569, 4.0102, 4.8118, 2.5065, 3.5396, 3.5024], device='cuda:3'), covar=tensor([0.0084, 0.0945, 0.1548, 0.0506, 0.0055, 0.1193, 0.0697, 0.0861], device='cuda:3'), in_proj_covar=tensor([0.0108, 0.0218, 0.0186, 0.0185, 0.0085, 0.0169, 0.0197, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 00:58:13,714 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8443, 4.4334, 4.6443, 3.2854, 3.5958, 3.5782, 2.5529, 2.1469], device='cuda:3'), covar=tensor([0.0188, 0.0132, 0.0051, 0.0298, 0.0326, 0.0176, 0.0719, 0.0931], device='cuda:3'), in_proj_covar=tensor([0.0057, 0.0047, 0.0045, 0.0057, 0.0077, 0.0054, 0.0070, 0.0076], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 00:58:18,862 INFO [train.py:898] (3/4) Epoch 10, batch 1750, loss[loss=0.2023, simple_loss=0.286, pruned_loss=0.05931, over 17041.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2773, pruned_loss=0.05497, over 3571740.45 frames. ], batch size: 78, lr: 1.14e-02, grad_scale: 4.0 2023-03-09 00:58:29,640 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1756, 5.1656, 4.7305, 5.0868, 5.0948, 4.5939, 5.0071, 4.8091], device='cuda:3'), covar=tensor([0.0432, 0.0447, 0.1491, 0.0794, 0.0580, 0.0405, 0.0424, 0.0999], device='cuda:3'), in_proj_covar=tensor([0.0386, 0.0445, 0.0591, 0.0348, 0.0334, 0.0406, 0.0431, 0.0557], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 00:59:11,440 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34501.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 00:59:18,088 INFO [train.py:898] (3/4) Epoch 10, batch 1800, loss[loss=0.1928, simple_loss=0.2833, pruned_loss=0.05111, over 18361.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2777, pruned_loss=0.05493, over 3579173.07 frames. 
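The grad_scale: field tracks mixed-precision loss scaling: it halves when a batch's fp16 gradients overflow (8.0 at batch 1650 above, 4.0 by batch 1700) and grows back after a stretch of stable steps (8.0 again by batch 2000; the exact growth policy is not shown in the log). A sketch of the standard dynamic-scaling loop with torch.cuda.amp, not necessarily this recipe's exact update rule; the model, batch and optimizer below are placeholders.

```python
import torch

scaler = torch.cuda.amp.GradScaler(init_scale=2.0)   # scale moves by 2x

def training_step(model, batch, optimizer):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)                  # placeholder forward pass
    scaler.scale(loss).backward()
    scaler.step(optimizer)                   # skipped if grads overflowed
    scaler.update()                          # halves the scale on overflow,
                                             # grows it after stable steps
    return scaler.get_scale()                # the logged grad_scale value
```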
], batch size: 55, lr: 1.14e-02, grad_scale: 4.0 2023-03-09 01:00:01,443 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.848e+02 3.049e+02 3.588e+02 4.503e+02 1.027e+03, threshold=7.176e+02, percent-clipped=5.0 2023-03-09 01:00:15,130 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5224, 3.6308, 5.1161, 4.2077, 3.0919, 2.9598, 4.2751, 5.2183], device='cuda:3'), covar=tensor([0.0912, 0.1598, 0.0089, 0.0362, 0.0963, 0.1095, 0.0378, 0.0131], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0237, 0.0097, 0.0157, 0.0174, 0.0172, 0.0167, 0.0136], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 01:00:16,829 INFO [train.py:898] (3/4) Epoch 10, batch 1850, loss[loss=0.2028, simple_loss=0.2884, pruned_loss=0.05863, over 17733.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2773, pruned_loss=0.05451, over 3592147.95 frames. ], batch size: 70, lr: 1.14e-02, grad_scale: 4.0 2023-03-09 01:00:17,938 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.98 vs. limit=2.0 2023-03-09 01:00:42,497 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-09 01:00:48,635 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.00 vs. limit=5.0 2023-03-09 01:00:59,529 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34593.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:01:01,750 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34595.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:01:15,581 INFO [train.py:898] (3/4) Epoch 10, batch 1900, loss[loss=0.1828, simple_loss=0.2515, pruned_loss=0.05708, over 18435.00 frames. ], tot_loss[loss=0.193, simple_loss=0.277, pruned_loss=0.05446, over 3596137.42 frames. ], batch size: 43, lr: 1.14e-02, grad_scale: 4.0 2023-03-09 01:01:33,785 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34622.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:01:38,212 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34626.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:01:58,495 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=34643.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:01:59,385 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.282e+02 3.244e+02 3.774e+02 4.780e+02 1.001e+03, threshold=7.549e+02, percent-clipped=4.0 2023-03-09 01:02:08,738 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2051, 5.0093, 5.3400, 5.2766, 5.1823, 5.8773, 5.4937, 5.2548], device='cuda:3'), covar=tensor([0.0899, 0.0661, 0.0682, 0.0655, 0.1431, 0.0749, 0.0566, 0.1544], device='cuda:3'), in_proj_covar=tensor([0.0296, 0.0223, 0.0241, 0.0237, 0.0275, 0.0336, 0.0222, 0.0328], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-09 01:02:11,238 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34654.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:02:14,314 INFO [train.py:898] (3/4) Epoch 10, batch 1950, loss[loss=0.2036, simple_loss=0.2908, pruned_loss=0.05818, over 18481.00 frames. ], tot_loss[loss=0.1926, simple_loss=0.2768, pruned_loss=0.05419, over 3604585.42 frames. 
], batch size: 59, lr: 1.14e-02, grad_scale: 4.0 2023-03-09 01:02:44,912 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34683.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 01:02:49,853 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34687.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:03:05,167 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34700.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:03:12,471 INFO [train.py:898] (3/4) Epoch 10, batch 2000, loss[loss=0.1782, simple_loss=0.261, pruned_loss=0.04765, over 18286.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2772, pruned_loss=0.05434, over 3600805.17 frames. ], batch size: 49, lr: 1.14e-02, grad_scale: 8.0 2023-03-09 01:03:16,345 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34710.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:03:25,189 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9094, 3.7659, 3.6514, 3.2181, 3.5696, 2.9359, 2.9004, 3.6888], device='cuda:3'), covar=tensor([0.0027, 0.0069, 0.0057, 0.0102, 0.0071, 0.0137, 0.0161, 0.0072], device='cuda:3'), in_proj_covar=tensor([0.0087, 0.0112, 0.0101, 0.0149, 0.0103, 0.0146, 0.0153, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 01:03:56,810 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.230e+02 3.366e+02 4.037e+02 4.949e+02 9.171e+02, threshold=8.073e+02, percent-clipped=4.0 2023-03-09 01:04:01,515 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=34748.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:04:11,468 INFO [train.py:898] (3/4) Epoch 10, batch 2050, loss[loss=0.2032, simple_loss=0.2911, pruned_loss=0.05765, over 18323.00 frames. ], tot_loss[loss=0.1926, simple_loss=0.2765, pruned_loss=0.05435, over 3596431.11 frames. ], batch size: 54, lr: 1.14e-02, grad_scale: 8.0 2023-03-09 01:04:28,691 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34771.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 01:04:43,865 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7792, 3.2150, 4.3304, 4.3141, 2.7260, 4.8939, 4.0581, 3.2049], device='cuda:3'), covar=tensor([0.0388, 0.1207, 0.0239, 0.0237, 0.1544, 0.0115, 0.0444, 0.0883], device='cuda:3'), in_proj_covar=tensor([0.0183, 0.0214, 0.0146, 0.0139, 0.0211, 0.0179, 0.0201, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 01:04:52,114 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4977, 6.0799, 5.5459, 5.8319, 5.5265, 5.5667, 6.1066, 6.0069], device='cuda:3'), covar=tensor([0.1154, 0.0607, 0.0407, 0.0686, 0.1409, 0.0642, 0.0469, 0.0562], device='cuda:3'), in_proj_covar=tensor([0.0523, 0.0420, 0.0323, 0.0466, 0.0634, 0.0466, 0.0595, 0.0440], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 01:05:04,275 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34801.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:05:10,833 INFO [train.py:898] (3/4) Epoch 10, batch 2100, loss[loss=0.1727, simple_loss=0.2614, pruned_loss=0.04196, over 18558.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2762, pruned_loss=0.05432, over 3594767.45 frames. 
], batch size: 49, lr: 1.14e-02, grad_scale: 8.0 2023-03-09 01:05:54,598 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.339e+02 3.283e+02 3.862e+02 4.779e+02 1.024e+03, threshold=7.723e+02, percent-clipped=2.0 2023-03-09 01:06:01,102 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=34849.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:06:09,704 INFO [train.py:898] (3/4) Epoch 10, batch 2150, loss[loss=0.2163, simple_loss=0.2971, pruned_loss=0.06775, over 16424.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2762, pruned_loss=0.05432, over 3593237.02 frames. ], batch size: 94, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:06:12,715 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-09 01:06:53,597 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8400, 4.4167, 4.5776, 3.3910, 3.5568, 3.7999, 2.6729, 2.1269], device='cuda:3'), covar=tensor([0.0207, 0.0128, 0.0069, 0.0257, 0.0331, 0.0158, 0.0706, 0.0940], device='cuda:3'), in_proj_covar=tensor([0.0059, 0.0046, 0.0046, 0.0058, 0.0080, 0.0055, 0.0073, 0.0077], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 01:06:56,859 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-09 01:07:07,667 INFO [train.py:898] (3/4) Epoch 10, batch 2200, loss[loss=0.2193, simple_loss=0.3016, pruned_loss=0.06854, over 17752.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2767, pruned_loss=0.05471, over 3591191.35 frames. ], batch size: 70, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:07:50,019 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.072e+02 3.371e+02 4.058e+02 4.999e+02 1.282e+03, threshold=8.115e+02, percent-clipped=5.0 2023-03-09 01:07:57,129 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34949.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:08:05,877 INFO [train.py:898] (3/4) Epoch 10, batch 2250, loss[loss=0.1726, simple_loss=0.2636, pruned_loss=0.04078, over 18487.00 frames. ], tot_loss[loss=0.1937, simple_loss=0.2772, pruned_loss=0.05507, over 3587593.78 frames. ], batch size: 51, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:08:06,908 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-09 01:08:10,679 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0389, 4.8336, 5.1328, 4.8232, 4.6420, 4.9880, 5.2163, 5.1692], device='cuda:3'), covar=tensor([0.0063, 0.0108, 0.0082, 0.0094, 0.0089, 0.0104, 0.0079, 0.0096], device='cuda:3'), in_proj_covar=tensor([0.0077, 0.0055, 0.0057, 0.0071, 0.0060, 0.0083, 0.0070, 0.0068], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:08:29,631 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34978.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:08:34,769 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34982.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:09:03,351 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.66 vs. limit=5.0 2023-03-09 01:09:05,058 INFO [train.py:898] (3/4) Epoch 10, batch 2300, loss[loss=0.1662, simple_loss=0.245, pruned_loss=0.04376, over 18501.00 frames. 
], tot_loss[loss=0.1931, simple_loss=0.2766, pruned_loss=0.05476, over 3580620.86 frames. ], batch size: 44, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:09:09,898 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0959, 5.0690, 4.6645, 5.0412, 5.0187, 4.4507, 4.9437, 4.6746], device='cuda:3'), covar=tensor([0.0363, 0.0399, 0.1256, 0.0630, 0.0521, 0.0410, 0.0362, 0.0831], device='cuda:3'), in_proj_covar=tensor([0.0392, 0.0449, 0.0593, 0.0352, 0.0333, 0.0409, 0.0433, 0.0563], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 01:09:30,777 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3471, 2.6512, 3.6988, 3.6369, 2.4101, 3.9818, 3.6493, 2.5039], device='cuda:3'), covar=tensor([0.0410, 0.1178, 0.0232, 0.0234, 0.1348, 0.0210, 0.0385, 0.0931], device='cuda:3'), in_proj_covar=tensor([0.0181, 0.0211, 0.0142, 0.0136, 0.0207, 0.0175, 0.0197, 0.0186], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 01:09:37,661 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-09 01:09:48,164 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.223e+02 3.414e+02 4.118e+02 5.165e+02 1.398e+03, threshold=8.236e+02, percent-clipped=7.0 2023-03-09 01:09:50,029 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0 2023-03-09 01:10:04,303 INFO [train.py:898] (3/4) Epoch 10, batch 2350, loss[loss=0.1895, simple_loss=0.2701, pruned_loss=0.05445, over 18263.00 frames. ], tot_loss[loss=0.1921, simple_loss=0.2755, pruned_loss=0.05428, over 3588403.38 frames. ], batch size: 47, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:10:14,491 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35066.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 01:10:45,321 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35092.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:11:03,091 INFO [train.py:898] (3/4) Epoch 10, batch 2400, loss[loss=0.1784, simple_loss=0.2516, pruned_loss=0.05258, over 18473.00 frames. ], tot_loss[loss=0.1926, simple_loss=0.276, pruned_loss=0.0546, over 3588187.75 frames. ], batch size: 43, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:11:44,816 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-09 01:11:46,351 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.146e+02 3.097e+02 3.497e+02 4.481e+02 9.242e+02, threshold=6.993e+02, percent-clipped=1.0 2023-03-09 01:11:57,294 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35153.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:12:02,047 INFO [train.py:898] (3/4) Epoch 10, batch 2450, loss[loss=0.1823, simple_loss=0.2618, pruned_loss=0.05144, over 18418.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2754, pruned_loss=0.05434, over 3581559.43 frames. ], batch size: 43, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:12:31,670 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-09 01:12:37,085 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0 2023-03-09 01:13:00,746 INFO [train.py:898] (3/4) Epoch 10, batch 2500, loss[loss=0.1778, simple_loss=0.2577, pruned_loss=0.0489, over 18571.00 frames. 
], tot_loss[loss=0.1923, simple_loss=0.2757, pruned_loss=0.05445, over 3585250.18 frames. ], batch size: 45, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:13:05,629 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3834, 2.6013, 2.5439, 2.6064, 3.4132, 3.2811, 2.9082, 2.6453], device='cuda:3'), covar=tensor([0.0158, 0.0312, 0.0615, 0.0454, 0.0152, 0.0223, 0.0398, 0.0387], device='cuda:3'), in_proj_covar=tensor([0.0118, 0.0106, 0.0153, 0.0139, 0.0101, 0.0087, 0.0136, 0.0131], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:13:43,834 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.285e+02 3.181e+02 4.060e+02 4.793e+02 9.479e+02, threshold=8.119e+02, percent-clipped=6.0 2023-03-09 01:13:50,179 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35249.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:13:58,867 INFO [train.py:898] (3/4) Epoch 10, batch 2550, loss[loss=0.2286, simple_loss=0.2992, pruned_loss=0.07899, over 12120.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2764, pruned_loss=0.05491, over 3586276.32 frames. ], batch size: 130, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:14:23,436 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35278.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 01:14:28,027 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35282.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:14:45,829 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=35297.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:14:57,513 INFO [train.py:898] (3/4) Epoch 10, batch 2600, loss[loss=0.1993, simple_loss=0.2793, pruned_loss=0.05968, over 18340.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2767, pruned_loss=0.05473, over 3592394.69 frames. ], batch size: 55, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:15:20,682 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=35326.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:15:25,298 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=35330.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:15:40,655 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.265e+02 3.407e+02 3.893e+02 4.729e+02 1.117e+03, threshold=7.786e+02, percent-clipped=1.0 2023-03-09 01:15:42,232 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35345.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:15:43,646 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.13 vs. limit=5.0 2023-03-09 01:15:55,975 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4831, 2.0383, 2.4971, 2.5909, 3.1660, 4.7525, 4.2488, 3.8760], device='cuda:3'), covar=tensor([0.1173, 0.1874, 0.2133, 0.1318, 0.1652, 0.0116, 0.0364, 0.0396], device='cuda:3'), in_proj_covar=tensor([0.0232, 0.0290, 0.0301, 0.0243, 0.0356, 0.0180, 0.0253, 0.0198], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 01:15:56,563 INFO [train.py:898] (3/4) Epoch 10, batch 2650, loss[loss=0.2098, simple_loss=0.2939, pruned_loss=0.06284, over 17136.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2766, pruned_loss=0.05453, over 3587073.44 frames. 
], batch size: 78, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:16:08,244 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35366.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:16:17,116 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35374.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:16:54,615 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35406.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 01:16:55,350 INFO [train.py:898] (3/4) Epoch 10, batch 2700, loss[loss=0.1912, simple_loss=0.2774, pruned_loss=0.0525, over 18488.00 frames. ], tot_loss[loss=0.1937, simple_loss=0.2777, pruned_loss=0.05489, over 3582793.40 frames. ], batch size: 47, lr: 1.13e-02, grad_scale: 8.0 2023-03-09 01:16:55,884 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5217, 2.0320, 2.6082, 2.5356, 3.1377, 4.8129, 4.2704, 3.9741], device='cuda:3'), covar=tensor([0.1191, 0.1956, 0.2193, 0.1357, 0.1774, 0.0113, 0.0409, 0.0400], device='cuda:3'), in_proj_covar=tensor([0.0235, 0.0293, 0.0304, 0.0246, 0.0359, 0.0182, 0.0257, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 01:17:03,987 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=35414.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:17:25,340 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.91 vs. limit=5.0 2023-03-09 01:17:28,447 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35435.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:17:37,962 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.429e+02 3.322e+02 4.284e+02 5.081e+02 8.413e+02, threshold=8.569e+02, percent-clipped=3.0 2023-03-09 01:17:42,698 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35448.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:17:53,627 INFO [train.py:898] (3/4) Epoch 10, batch 2750, loss[loss=0.1684, simple_loss=0.2526, pruned_loss=0.04208, over 18399.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2779, pruned_loss=0.05514, over 3577010.19 frames. ], batch size: 48, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:18:00,378 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2745, 2.7482, 2.3754, 2.6923, 3.4445, 3.2126, 2.8258, 2.7099], device='cuda:3'), covar=tensor([0.0157, 0.0255, 0.0662, 0.0386, 0.0159, 0.0173, 0.0440, 0.0422], device='cuda:3'), in_proj_covar=tensor([0.0113, 0.0103, 0.0149, 0.0134, 0.0098, 0.0084, 0.0131, 0.0126], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:18:09,211 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-09 01:18:33,970 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.18 vs. limit=5.0 2023-03-09 01:18:51,854 INFO [train.py:898] (3/4) Epoch 10, batch 2800, loss[loss=0.2225, simple_loss=0.2936, pruned_loss=0.07566, over 12905.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2783, pruned_loss=0.05561, over 3569143.09 frames. 
], batch size: 130, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:19:18,853 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35530.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:19:34,531 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.967e+02 3.480e+02 4.448e+02 5.529e+02 1.797e+03, threshold=8.895e+02, percent-clipped=6.0 2023-03-09 01:19:49,761 INFO [train.py:898] (3/4) Epoch 10, batch 2850, loss[loss=0.2227, simple_loss=0.3014, pruned_loss=0.07206, over 17980.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2778, pruned_loss=0.05548, over 3569916.09 frames. ], batch size: 65, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:20:26,150 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-09 01:20:29,932 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35591.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:20:47,590 INFO [train.py:898] (3/4) Epoch 10, batch 2900, loss[loss=0.2134, simple_loss=0.2977, pruned_loss=0.06455, over 15946.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.278, pruned_loss=0.05537, over 3570095.64 frames. ], batch size: 94, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:20:59,316 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8071, 5.3385, 5.3385, 5.3565, 4.8413, 5.2648, 4.6570, 5.2398], device='cuda:3'), covar=tensor([0.0216, 0.0278, 0.0191, 0.0286, 0.0392, 0.0212, 0.1119, 0.0256], device='cuda:3'), in_proj_covar=tensor([0.0168, 0.0210, 0.0198, 0.0232, 0.0208, 0.0216, 0.0277, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 01:21:32,209 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.306e+02 3.157e+02 3.765e+02 4.717e+02 8.382e+02, threshold=7.531e+02, percent-clipped=0.0 2023-03-09 01:21:47,244 INFO [train.py:898] (3/4) Epoch 10, batch 2950, loss[loss=0.1614, simple_loss=0.2381, pruned_loss=0.0423, over 18388.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2765, pruned_loss=0.0545, over 3580119.60 frames. ], batch size: 42, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:22:31,448 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2103, 5.1273, 5.4308, 5.3268, 5.1670, 5.9930, 5.6240, 5.2479], device='cuda:3'), covar=tensor([0.1082, 0.0592, 0.0710, 0.0717, 0.1312, 0.0711, 0.0592, 0.1751], device='cuda:3'), in_proj_covar=tensor([0.0303, 0.0232, 0.0248, 0.0246, 0.0285, 0.0346, 0.0228, 0.0339], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-09 01:22:39,384 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35701.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:22:46,134 INFO [train.py:898] (3/4) Epoch 10, batch 3000, loss[loss=0.1974, simple_loss=0.2784, pruned_loss=0.05813, over 16226.00 frames. ], tot_loss[loss=0.1927, simple_loss=0.2766, pruned_loss=0.0544, over 3588924.92 frames. ], batch size: 94, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:22:46,135 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 01:22:58,182 INFO [train.py:932] (3/4) Epoch 10, validation: loss=0.1597, simple_loss=0.2619, pruned_loss=0.0287, over 944034.00 frames. 
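The validation records just above differ from the per-batch loss[...] and running tot_loss[...] figures: they are a single frames-weighted average over the whole dev set, which is why every validation pass reports the same "over 944034.00 frames". A minimal sketch of that kind of aggregation, assuming each batch yields a summed loss and a frame count (the function and argument names below are illustrative placeholders, not icefall's actual API):

    def frames_weighted_loss(batches, compute_loss):
        # Accumulate the summed loss and the frame count for each batch;
        # the reported figure is their ratio, logged as
        # "validation: loss=<avg>, ..., over <total_frames> frames."
        total_loss = 0.0
        total_frames = 0.0
        for batch in batches:
            loss_sum, num_frames = compute_loss(batch)  # hypothetical helper
            total_loss += loss_sum
            total_frames += num_frames
        return total_loss / total_frames

Weighting by frames rather than by batch keeps utterances of different lengths from skewing the average when batch sizes vary, so the dev-set figure stays directly comparable from one validation pass to the next.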
2023-03-09 01:22:58,183 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 01:23:08,097 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6772, 2.9175, 4.1982, 3.8755, 2.5158, 4.6440, 4.0471, 2.7763], device='cuda:3'), covar=tensor([0.0385, 0.1226, 0.0241, 0.0273, 0.1558, 0.0170, 0.0323, 0.0979], device='cuda:3'), in_proj_covar=tensor([0.0179, 0.0212, 0.0146, 0.0138, 0.0210, 0.0179, 0.0199, 0.0189], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 01:23:25,277 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35730.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:23:40,662 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.222e+02 3.466e+02 3.980e+02 4.724e+02 8.688e+02, threshold=7.961e+02, percent-clipped=2.0 2023-03-09 01:23:45,689 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35748.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:23:56,263 INFO [train.py:898] (3/4) Epoch 10, batch 3050, loss[loss=0.2144, simple_loss=0.2914, pruned_loss=0.06867, over 18295.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2767, pruned_loss=0.05445, over 3586387.95 frames. ], batch size: 57, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:24:13,447 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35771.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:24:14,696 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8020, 3.7289, 3.5697, 3.2155, 3.5798, 2.7437, 2.8970, 3.7439], device='cuda:3'), covar=tensor([0.0031, 0.0062, 0.0068, 0.0096, 0.0063, 0.0156, 0.0157, 0.0053], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0114, 0.0102, 0.0148, 0.0103, 0.0145, 0.0153, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 01:24:31,308 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8233, 4.4262, 4.5538, 3.1980, 3.7192, 3.7408, 2.4948, 2.3411], device='cuda:3'), covar=tensor([0.0176, 0.0157, 0.0067, 0.0307, 0.0312, 0.0196, 0.0797, 0.0891], device='cuda:3'), in_proj_covar=tensor([0.0057, 0.0047, 0.0046, 0.0058, 0.0078, 0.0055, 0.0071, 0.0077], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 01:24:40,073 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0781, 4.9602, 5.1344, 4.8169, 4.8823, 4.9605, 5.2999, 5.2242], device='cuda:3'), covar=tensor([0.0071, 0.0087, 0.0061, 0.0099, 0.0073, 0.0114, 0.0057, 0.0096], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0056, 0.0058, 0.0074, 0.0062, 0.0085, 0.0070, 0.0071], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:24:42,262 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=35796.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:24:46,224 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-09 01:24:55,229 INFO [train.py:898] (3/4) Epoch 10, batch 3100, loss[loss=0.1991, simple_loss=0.2937, pruned_loss=0.05222, over 18349.00 frames. ], tot_loss[loss=0.1926, simple_loss=0.2764, pruned_loss=0.05438, over 3583678.41 frames. 
], batch size: 55, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:25:00,718 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35811.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:25:06,015 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4848, 2.1023, 2.7537, 2.6099, 3.4727, 5.1271, 4.5412, 4.2203], device='cuda:3'), covar=tensor([0.1228, 0.1911, 0.2298, 0.1357, 0.1578, 0.0085, 0.0342, 0.0369], device='cuda:3'), in_proj_covar=tensor([0.0232, 0.0292, 0.0300, 0.0243, 0.0354, 0.0182, 0.0253, 0.0198], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 01:25:26,976 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35832.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:25:39,853 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.088e+02 3.519e+02 4.020e+02 4.842e+02 1.442e+03, threshold=8.041e+02, percent-clipped=6.0 2023-03-09 01:25:50,278 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3903, 4.9022, 4.9230, 5.0250, 4.5168, 4.8694, 3.8235, 4.8246], device='cuda:3'), covar=tensor([0.0323, 0.0491, 0.0334, 0.0373, 0.0459, 0.0320, 0.1966, 0.0413], device='cuda:3'), in_proj_covar=tensor([0.0170, 0.0212, 0.0199, 0.0234, 0.0210, 0.0220, 0.0275, 0.0203], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 01:25:54,160 INFO [train.py:898] (3/4) Epoch 10, batch 3150, loss[loss=0.1975, simple_loss=0.2833, pruned_loss=0.05585, over 18278.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2763, pruned_loss=0.05422, over 3585435.84 frames. ], batch size: 54, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:26:12,927 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35872.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:26:28,863 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35886.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:26:52,115 INFO [train.py:898] (3/4) Epoch 10, batch 3200, loss[loss=0.1903, simple_loss=0.2776, pruned_loss=0.05148, over 18303.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2771, pruned_loss=0.05478, over 3579939.60 frames. ], batch size: 54, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:27:08,855 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7989, 4.0962, 4.0667, 4.0926, 3.8063, 3.9883, 3.6665, 4.0086], device='cuda:3'), covar=tensor([0.0265, 0.0333, 0.0286, 0.0441, 0.0366, 0.0286, 0.0969, 0.0341], device='cuda:3'), in_proj_covar=tensor([0.0171, 0.0211, 0.0200, 0.0236, 0.0210, 0.0221, 0.0277, 0.0203], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 01:27:35,148 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.239e+02 3.585e+02 4.251e+02 5.261e+02 1.507e+03, threshold=8.503e+02, percent-clipped=6.0 2023-03-09 01:27:49,557 INFO [train.py:898] (3/4) Epoch 10, batch 3250, loss[loss=0.1496, simple_loss=0.231, pruned_loss=0.03412, over 18249.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2764, pruned_loss=0.05486, over 3570748.75 frames. 
], batch size: 45, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:28:46,235 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36001.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 01:28:52,533 INFO [train.py:898] (3/4) Epoch 10, batch 3300, loss[loss=0.1674, simple_loss=0.2481, pruned_loss=0.04334, over 18087.00 frames. ], tot_loss[loss=0.193, simple_loss=0.2766, pruned_loss=0.05473, over 3585610.70 frames. ], batch size: 40, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:29:19,508 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36030.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:29:35,214 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.117e+02 3.405e+02 4.202e+02 5.092e+02 8.448e+02, threshold=8.405e+02, percent-clipped=0.0 2023-03-09 01:29:40,873 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=36049.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:29:49,539 INFO [train.py:898] (3/4) Epoch 10, batch 3350, loss[loss=0.2344, simple_loss=0.3108, pruned_loss=0.07898, over 18367.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2777, pruned_loss=0.05535, over 3572336.15 frames. ], batch size: 56, lr: 1.12e-02, grad_scale: 8.0 2023-03-09 01:30:13,378 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=36078.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:30:38,835 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-09 01:30:47,896 INFO [train.py:898] (3/4) Epoch 10, batch 3400, loss[loss=0.1816, simple_loss=0.2581, pruned_loss=0.05254, over 17296.00 frames. ], tot_loss[loss=0.1937, simple_loss=0.2775, pruned_loss=0.05493, over 3575539.57 frames. ], batch size: 38, lr: 1.11e-02, grad_scale: 8.0 2023-03-09 01:31:10,803 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36127.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:31:31,614 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.609e+02 3.400e+02 4.178e+02 5.234e+02 8.202e+02, threshold=8.355e+02, percent-clipped=0.0 2023-03-09 01:31:46,927 INFO [train.py:898] (3/4) Epoch 10, batch 3450, loss[loss=0.2349, simple_loss=0.3167, pruned_loss=0.07653, over 18323.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.277, pruned_loss=0.05511, over 3564858.96 frames. ], batch size: 54, lr: 1.11e-02, grad_scale: 8.0 2023-03-09 01:31:58,394 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36167.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:32:00,500 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2786, 5.1172, 5.4391, 5.3621, 5.1159, 5.9001, 5.6319, 5.2380], device='cuda:3'), covar=tensor([0.0852, 0.0635, 0.0589, 0.0585, 0.1365, 0.0795, 0.0500, 0.1673], device='cuda:3'), in_proj_covar=tensor([0.0304, 0.0235, 0.0248, 0.0248, 0.0287, 0.0353, 0.0229, 0.0341], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3') 2023-03-09 01:32:10,300 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. 
limit=2.0 2023-03-09 01:32:14,204 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36181.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:32:20,847 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36186.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:32:45,290 INFO [train.py:898] (3/4) Epoch 10, batch 3500, loss[loss=0.1897, simple_loss=0.2827, pruned_loss=0.04838, over 18300.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2762, pruned_loss=0.05468, over 3576159.10 frames. ], batch size: 54, lr: 1.11e-02, grad_scale: 8.0 2023-03-09 01:32:51,413 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6209, 3.4386, 1.8296, 4.3051, 3.1007, 4.2537, 1.9731, 3.7957], device='cuda:3'), covar=tensor([0.0440, 0.0598, 0.1341, 0.0376, 0.0708, 0.0276, 0.1319, 0.0381], device='cuda:3'), in_proj_covar=tensor([0.0189, 0.0209, 0.0175, 0.0233, 0.0181, 0.0234, 0.0190, 0.0183], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:33:16,083 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=36234.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:33:17,889 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.31 vs. limit=5.0 2023-03-09 01:33:25,126 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36242.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:33:26,936 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.239e+02 3.308e+02 3.805e+02 4.814e+02 1.268e+03, threshold=7.610e+02, percent-clipped=2.0 2023-03-09 01:33:28,442 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0293, 3.3063, 2.5804, 3.3288, 4.0762, 2.5141, 3.2920, 3.3601], device='cuda:3'), covar=tensor([0.0124, 0.1028, 0.1165, 0.0511, 0.0072, 0.1041, 0.0593, 0.0632], device='cuda:3'), in_proj_covar=tensor([0.0111, 0.0226, 0.0190, 0.0185, 0.0087, 0.0174, 0.0200, 0.0202], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:33:41,581 INFO [train.py:898] (3/4) Epoch 10, batch 3550, loss[loss=0.1875, simple_loss=0.274, pruned_loss=0.05057, over 18494.00 frames. ], tot_loss[loss=0.1925, simple_loss=0.2763, pruned_loss=0.05439, over 3573100.48 frames. ], batch size: 51, lr: 1.11e-02, grad_scale: 8.0 2023-03-09 01:34:36,260 INFO [train.py:898] (3/4) Epoch 10, batch 3600, loss[loss=0.2151, simple_loss=0.2976, pruned_loss=0.06628, over 18190.00 frames. ], tot_loss[loss=0.1919, simple_loss=0.2757, pruned_loss=0.0541, over 3579014.37 frames. ], batch size: 62, lr: 1.11e-02, grad_scale: 8.0 2023-03-09 01:35:05,968 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9343, 3.8323, 4.0253, 3.7752, 3.8193, 3.8244, 3.9896, 3.9559], device='cuda:3'), covar=tensor([0.0080, 0.0100, 0.0081, 0.0091, 0.0071, 0.0126, 0.0078, 0.0101], device='cuda:3'), in_proj_covar=tensor([0.0080, 0.0056, 0.0059, 0.0074, 0.0062, 0.0087, 0.0071, 0.0072], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:35:41,695 INFO [train.py:898] (3/4) Epoch 11, batch 0, loss[loss=0.2032, simple_loss=0.2892, pruned_loss=0.05865, over 18615.00 frames. ], tot_loss[loss=0.2032, simple_loss=0.2892, pruned_loss=0.05865, over 18615.00 frames. 
], batch size: 52, lr: 1.06e-02, grad_scale: 8.0 2023-03-09 01:35:41,695 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 01:35:53,393 INFO [train.py:932] (3/4) Epoch 11, validation: loss=0.1597, simple_loss=0.2625, pruned_loss=0.0284, over 944034.00 frames. 2023-03-09 01:35:53,395 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 01:35:56,696 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.034e+02 3.291e+02 3.800e+02 4.653e+02 8.329e+02, threshold=7.601e+02, percent-clipped=2.0 2023-03-09 01:36:03,941 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1380, 5.5671, 3.1525, 5.3156, 5.1979, 5.5785, 5.3545, 2.8822], device='cuda:3'), covar=tensor([0.0156, 0.0038, 0.0596, 0.0059, 0.0056, 0.0047, 0.0079, 0.0853], device='cuda:3'), in_proj_covar=tensor([0.0075, 0.0063, 0.0086, 0.0078, 0.0073, 0.0062, 0.0074, 0.0090], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0005, 0.0004, 0.0004, 0.0003, 0.0004, 0.0005], device='cuda:3') 2023-03-09 01:36:51,728 INFO [train.py:898] (3/4) Epoch 11, batch 50, loss[loss=0.1939, simple_loss=0.2835, pruned_loss=0.05213, over 18518.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2757, pruned_loss=0.05358, over 809116.10 frames. ], batch size: 53, lr: 1.06e-02, grad_scale: 16.0 2023-03-09 01:37:35,776 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36427.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:37:51,426 INFO [train.py:898] (3/4) Epoch 11, batch 100, loss[loss=0.1681, simple_loss=0.2528, pruned_loss=0.04168, over 18536.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2738, pruned_loss=0.05228, over 1441209.28 frames. ], batch size: 49, lr: 1.06e-02, grad_scale: 16.0 2023-03-09 01:37:54,913 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.346e+02 3.421e+02 4.363e+02 5.513e+02 1.312e+03, threshold=8.726e+02, percent-clipped=4.0 2023-03-09 01:38:22,551 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36467.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:38:32,187 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=36475.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:38:46,094 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6069, 2.8039, 2.7107, 2.8749, 3.6589, 3.4255, 3.1501, 2.9359], device='cuda:3'), covar=tensor([0.0196, 0.0284, 0.0534, 0.0376, 0.0171, 0.0195, 0.0309, 0.0379], device='cuda:3'), in_proj_covar=tensor([0.0116, 0.0101, 0.0147, 0.0133, 0.0100, 0.0085, 0.0130, 0.0128], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:38:50,323 INFO [train.py:898] (3/4) Epoch 11, batch 150, loss[loss=0.1647, simple_loss=0.2454, pruned_loss=0.04194, over 18511.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2752, pruned_loss=0.05264, over 1905609.47 frames. ], batch size: 44, lr: 1.06e-02, grad_scale: 16.0 2023-03-09 01:39:10,367 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.92 vs. 
limit=5.0 2023-03-09 01:39:18,307 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=36515.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:39:44,356 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36537.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:39:48,711 INFO [train.py:898] (3/4) Epoch 11, batch 200, loss[loss=0.1946, simple_loss=0.2841, pruned_loss=0.05251, over 18560.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2757, pruned_loss=0.05345, over 2286837.31 frames. ], batch size: 54, lr: 1.06e-02, grad_scale: 16.0 2023-03-09 01:39:52,128 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.119e+02 3.228e+02 3.660e+02 4.259e+02 9.099e+02, threshold=7.320e+02, percent-clipped=1.0 2023-03-09 01:40:47,361 INFO [train.py:898] (3/4) Epoch 11, batch 250, loss[loss=0.1876, simple_loss=0.287, pruned_loss=0.04407, over 18636.00 frames. ], tot_loss[loss=0.1916, simple_loss=0.2758, pruned_loss=0.05372, over 2571772.14 frames. ], batch size: 52, lr: 1.06e-02, grad_scale: 8.0 2023-03-09 01:40:58,701 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 01:41:04,211 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36605.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:41:07,768 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36608.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:41:12,363 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36612.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:41:47,060 INFO [train.py:898] (3/4) Epoch 11, batch 300, loss[loss=0.1762, simple_loss=0.2488, pruned_loss=0.05185, over 18393.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2748, pruned_loss=0.05311, over 2806377.69 frames. ], batch size: 42, lr: 1.06e-02, grad_scale: 8.0 2023-03-09 01:41:51,511 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.170e+02 3.364e+02 4.244e+02 4.969e+02 8.450e+02, threshold=8.489e+02, percent-clipped=1.0 2023-03-09 01:42:15,711 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36666.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:42:19,083 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36669.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:42:23,417 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36673.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:42:45,383 INFO [train.py:898] (3/4) Epoch 11, batch 350, loss[loss=0.2097, simple_loss=0.2917, pruned_loss=0.06381, over 18298.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.2749, pruned_loss=0.05288, over 2988535.33 frames. ], batch size: 57, lr: 1.06e-02, grad_scale: 8.0 2023-03-09 01:42:55,939 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36700.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:43:44,259 INFO [train.py:898] (3/4) Epoch 11, batch 400, loss[loss=0.1778, simple_loss=0.2717, pruned_loss=0.04188, over 18490.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2746, pruned_loss=0.05288, over 3115155.87 frames. 
], batch size: 51, lr: 1.06e-02, grad_scale: 8.0 2023-03-09 01:43:48,754 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.208e+02 3.230e+02 3.792e+02 4.617e+02 9.263e+02, threshold=7.584e+02, percent-clipped=1.0 2023-03-09 01:44:07,427 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36761.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 01:44:42,728 INFO [train.py:898] (3/4) Epoch 11, batch 450, loss[loss=0.1656, simple_loss=0.2419, pruned_loss=0.04465, over 17735.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2744, pruned_loss=0.05275, over 3222118.93 frames. ], batch size: 39, lr: 1.05e-02, grad_scale: 8.0 2023-03-09 01:44:59,895 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36805.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:45:35,205 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5485, 5.2168, 5.6741, 5.7184, 5.3190, 6.2016, 5.8031, 5.4963], device='cuda:3'), covar=tensor([0.0825, 0.0584, 0.0697, 0.0563, 0.1419, 0.0664, 0.0582, 0.1769], device='cuda:3'), in_proj_covar=tensor([0.0304, 0.0237, 0.0251, 0.0251, 0.0284, 0.0351, 0.0231, 0.0340], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-09 01:45:36,437 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36837.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:45:41,927 INFO [train.py:898] (3/4) Epoch 11, batch 500, loss[loss=0.1711, simple_loss=0.2648, pruned_loss=0.03867, over 18567.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2738, pruned_loss=0.0523, over 3306935.80 frames. ], batch size: 54, lr: 1.05e-02, grad_scale: 8.0 2023-03-09 01:45:47,160 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.052e+02 3.251e+02 4.103e+02 5.001e+02 1.385e+03, threshold=8.205e+02, percent-clipped=3.0 2023-03-09 01:45:59,964 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.26 vs. limit=5.0 2023-03-09 01:46:11,975 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36866.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 01:46:33,288 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=36885.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:46:39,980 INFO [train.py:898] (3/4) Epoch 11, batch 550, loss[loss=0.1692, simple_loss=0.2412, pruned_loss=0.04861, over 18399.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.274, pruned_loss=0.05225, over 3373068.78 frames. 
], batch size: 42, lr: 1.05e-02, grad_scale: 8.0 2023-03-09 01:47:04,744 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36911.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:47:16,299 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5821, 3.8271, 5.1839, 4.3261, 3.4262, 2.9762, 4.5895, 5.3318], device='cuda:3'), covar=tensor([0.0777, 0.1368, 0.0086, 0.0319, 0.0767, 0.1054, 0.0294, 0.0109], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0237, 0.0101, 0.0158, 0.0176, 0.0175, 0.0171, 0.0141], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 01:47:26,584 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4578, 3.0178, 2.5793, 2.8043, 3.5475, 3.5275, 3.1210, 3.1073], device='cuda:3'), covar=tensor([0.0161, 0.0240, 0.0578, 0.0310, 0.0163, 0.0117, 0.0309, 0.0264], device='cuda:3'), in_proj_covar=tensor([0.0118, 0.0104, 0.0152, 0.0137, 0.0103, 0.0086, 0.0136, 0.0130], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:47:38,613 INFO [train.py:898] (3/4) Epoch 11, batch 600, loss[loss=0.1847, simple_loss=0.2581, pruned_loss=0.05559, over 18241.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2741, pruned_loss=0.05201, over 3435836.72 frames. ], batch size: 45, lr: 1.05e-02, grad_scale: 8.0 2023-03-09 01:47:43,327 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.417e+02 3.303e+02 3.773e+02 4.556e+02 8.624e+02, threshold=7.545e+02, percent-clipped=1.0 2023-03-09 01:48:00,159 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36958.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:48:03,431 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36961.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:48:06,920 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36964.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:48:11,515 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36968.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:48:16,206 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36972.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:48:26,434 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5186, 5.5010, 5.0510, 5.4061, 5.4468, 4.8100, 5.3403, 5.0861], device='cuda:3'), covar=tensor([0.0343, 0.0368, 0.1255, 0.0695, 0.0515, 0.0399, 0.0375, 0.0919], device='cuda:3'), in_proj_covar=tensor([0.0414, 0.0460, 0.0607, 0.0364, 0.0350, 0.0424, 0.0457, 0.0574], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 01:48:37,477 INFO [train.py:898] (3/4) Epoch 11, batch 650, loss[loss=0.2198, simple_loss=0.3118, pruned_loss=0.06384, over 17725.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2738, pruned_loss=0.05185, over 3473229.56 frames. 
], batch size: 70, lr: 1.05e-02, grad_scale: 8.0 2023-03-09 01:48:37,803 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8680, 5.3289, 5.3369, 5.3889, 4.9214, 5.2219, 4.6543, 5.2275], device='cuda:3'), covar=tensor([0.0202, 0.0290, 0.0192, 0.0272, 0.0300, 0.0233, 0.1051, 0.0312], device='cuda:3'), in_proj_covar=tensor([0.0174, 0.0218, 0.0206, 0.0242, 0.0220, 0.0227, 0.0283, 0.0210], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 01:48:54,449 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37004.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:49:12,024 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37019.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:49:36,605 INFO [train.py:898] (3/4) Epoch 11, batch 700, loss[loss=0.1869, simple_loss=0.2743, pruned_loss=0.04978, over 18482.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2738, pruned_loss=0.05176, over 3504331.65 frames. ], batch size: 53, lr: 1.05e-02, grad_scale: 8.0 2023-03-09 01:49:40,956 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.338e+02 3.308e+02 3.875e+02 4.751e+02 1.116e+03, threshold=7.751e+02, percent-clipped=5.0 2023-03-09 01:49:54,669 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37056.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 01:50:05,419 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37065.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:50:13,371 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2151, 5.1368, 5.4103, 5.3849, 5.1969, 6.0255, 5.5531, 5.4284], device='cuda:3'), covar=tensor([0.0785, 0.0592, 0.0704, 0.0615, 0.1190, 0.0696, 0.0675, 0.1422], device='cuda:3'), in_proj_covar=tensor([0.0305, 0.0235, 0.0250, 0.0250, 0.0284, 0.0352, 0.0232, 0.0340], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-09 01:50:34,420 INFO [train.py:898] (3/4) Epoch 11, batch 750, loss[loss=0.1827, simple_loss=0.2711, pruned_loss=0.04709, over 18608.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2747, pruned_loss=0.05235, over 3522421.89 frames. ], batch size: 52, lr: 1.05e-02, grad_scale: 8.0 2023-03-09 01:50:44,889 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4636, 5.4191, 4.9839, 5.3405, 5.3808, 4.7620, 5.2919, 5.0622], device='cuda:3'), covar=tensor([0.0368, 0.0391, 0.1326, 0.0688, 0.0517, 0.0415, 0.0394, 0.0912], device='cuda:3'), in_proj_covar=tensor([0.0410, 0.0458, 0.0604, 0.0361, 0.0348, 0.0420, 0.0455, 0.0576], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 01:51:33,960 INFO [train.py:898] (3/4) Epoch 11, batch 800, loss[loss=0.19, simple_loss=0.2772, pruned_loss=0.0514, over 18362.00 frames. ], tot_loss[loss=0.189, simple_loss=0.2738, pruned_loss=0.05208, over 3540974.54 frames. 
], batch size: 55, lr: 1.05e-02, grad_scale: 8.0 2023-03-09 01:51:38,503 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.114e+02 3.217e+02 3.576e+02 4.437e+02 1.024e+03, threshold=7.151e+02, percent-clipped=5.0 2023-03-09 01:51:58,754 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37161.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 01:52:27,071 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4012, 1.9607, 2.3697, 2.5428, 2.8989, 4.5925, 4.1945, 3.4173], device='cuda:3'), covar=tensor([0.1471, 0.2562, 0.2699, 0.1589, 0.2490, 0.0157, 0.0443, 0.0605], device='cuda:3'), in_proj_covar=tensor([0.0240, 0.0296, 0.0309, 0.0247, 0.0360, 0.0188, 0.0258, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 01:52:33,370 INFO [train.py:898] (3/4) Epoch 11, batch 850, loss[loss=0.1799, simple_loss=0.2675, pruned_loss=0.04621, over 18505.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.274, pruned_loss=0.05227, over 3553250.35 frames. ], batch size: 47, lr: 1.05e-02, grad_scale: 8.0 2023-03-09 01:52:41,750 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5507, 2.8785, 2.4390, 2.7409, 3.6537, 3.5521, 3.0854, 2.9874], device='cuda:3'), covar=tensor([0.0159, 0.0257, 0.0588, 0.0297, 0.0117, 0.0112, 0.0290, 0.0258], device='cuda:3'), in_proj_covar=tensor([0.0115, 0.0104, 0.0148, 0.0133, 0.0099, 0.0084, 0.0131, 0.0127], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:53:32,675 INFO [train.py:898] (3/4) Epoch 11, batch 900, loss[loss=0.2078, simple_loss=0.2974, pruned_loss=0.05911, over 18614.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2738, pruned_loss=0.0517, over 3575511.98 frames. ], batch size: 52, lr: 1.05e-02, grad_scale: 4.0 2023-03-09 01:53:38,419 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.869e+02 3.048e+02 3.504e+02 4.549e+02 1.028e+03, threshold=7.008e+02, percent-clipped=4.0 2023-03-09 01:53:56,604 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37261.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:54:00,404 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37264.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:54:04,633 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37267.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:54:05,820 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37268.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:54:05,900 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37268.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:54:32,482 INFO [train.py:898] (3/4) Epoch 11, batch 950, loss[loss=0.1933, simple_loss=0.2871, pruned_loss=0.04973, over 18344.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2738, pruned_loss=0.05161, over 3582943.82 frames. 
], batch size: 55, lr: 1.05e-02, grad_scale: 4.0 2023-03-09 01:54:52,610 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=37309.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:54:52,888 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3311, 4.4822, 2.4800, 4.4004, 5.3311, 2.6138, 3.9486, 3.9511], device='cuda:3'), covar=tensor([0.0082, 0.1057, 0.1494, 0.0481, 0.0046, 0.1195, 0.0596, 0.0673], device='cuda:3'), in_proj_covar=tensor([0.0113, 0.0227, 0.0189, 0.0187, 0.0088, 0.0172, 0.0199, 0.0203], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 01:54:56,035 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=37312.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:54:58,792 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37314.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:55:01,124 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=37316.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:55:13,946 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4168, 5.9394, 5.4426, 5.7454, 5.4869, 5.4443, 6.0218, 5.9214], device='cuda:3'), covar=tensor([0.1108, 0.0752, 0.0438, 0.0690, 0.1424, 0.0604, 0.0508, 0.0681], device='cuda:3'), in_proj_covar=tensor([0.0518, 0.0426, 0.0326, 0.0469, 0.0640, 0.0470, 0.0596, 0.0447], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 01:55:18,028 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37329.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:55:31,163 INFO [train.py:898] (3/4) Epoch 11, batch 1000, loss[loss=0.2023, simple_loss=0.2862, pruned_loss=0.05922, over 17803.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.274, pruned_loss=0.05175, over 3589253.88 frames. ], batch size: 70, lr: 1.05e-02, grad_scale: 4.0 2023-03-09 01:55:36,613 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.267e+02 3.387e+02 4.010e+02 5.026e+02 9.863e+02, threshold=8.020e+02, percent-clipped=4.0 2023-03-09 01:55:42,644 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6968, 3.4250, 5.1469, 3.0537, 4.2873, 2.5321, 3.0254, 2.0070], device='cuda:3'), covar=tensor([0.1042, 0.0929, 0.0074, 0.0673, 0.0609, 0.2298, 0.2423, 0.1912], device='cuda:3'), in_proj_covar=tensor([0.0194, 0.0212, 0.0115, 0.0166, 0.0225, 0.0244, 0.0277, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 01:55:48,155 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37356.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 01:55:52,761 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37360.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:55:57,534 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37364.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:56:12,259 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.77 vs. limit=2.0 2023-03-09 01:56:29,968 INFO [train.py:898] (3/4) Epoch 11, batch 1050, loss[loss=0.1946, simple_loss=0.2899, pruned_loss=0.04965, over 18306.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2748, pruned_loss=0.05222, over 3589528.93 frames. 
], batch size: 54, lr: 1.05e-02, grad_scale: 4.0 2023-03-09 01:56:44,682 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=37404.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 01:57:09,669 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37425.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 01:57:28,552 INFO [train.py:898] (3/4) Epoch 11, batch 1100, loss[loss=0.1919, simple_loss=0.2856, pruned_loss=0.04914, over 18388.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2744, pruned_loss=0.05205, over 3589611.65 frames. ], batch size: 52, lr: 1.05e-02, grad_scale: 4.0 2023-03-09 01:57:34,030 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.186e+02 3.484e+02 3.927e+02 4.853e+02 9.182e+02, threshold=7.853e+02, percent-clipped=3.0 2023-03-09 01:57:51,387 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37461.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 01:58:27,574 INFO [train.py:898] (3/4) Epoch 11, batch 1150, loss[loss=0.1729, simple_loss=0.2606, pruned_loss=0.04267, over 18270.00 frames. ], tot_loss[loss=0.189, simple_loss=0.274, pruned_loss=0.05201, over 3580765.96 frames. ], batch size: 47, lr: 1.04e-02, grad_scale: 4.0 2023-03-09 01:58:48,402 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=37509.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 01:59:26,266 INFO [train.py:898] (3/4) Epoch 11, batch 1200, loss[loss=0.2258, simple_loss=0.3084, pruned_loss=0.07156, over 12600.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2744, pruned_loss=0.05248, over 3571755.33 frames. ], batch size: 129, lr: 1.04e-02, grad_scale: 8.0 2023-03-09 01:59:31,793 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.211e+02 3.034e+02 3.620e+02 4.493e+02 1.296e+03, threshold=7.239e+02, percent-clipped=3.0 2023-03-09 01:59:55,909 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37567.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 02:00:24,872 INFO [train.py:898] (3/4) Epoch 11, batch 1250, loss[loss=0.1935, simple_loss=0.2751, pruned_loss=0.05593, over 18418.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2729, pruned_loss=0.05201, over 3577252.31 frames. 
], batch size: 52, lr: 1.04e-02, grad_scale: 8.0 2023-03-09 02:00:41,346 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5598, 2.0485, 2.5567, 2.6980, 3.2141, 4.6156, 4.1396, 3.6794], device='cuda:3'), covar=tensor([0.1214, 0.2036, 0.2350, 0.1394, 0.1787, 0.0139, 0.0441, 0.0484], device='cuda:3'), in_proj_covar=tensor([0.0240, 0.0297, 0.0311, 0.0247, 0.0359, 0.0186, 0.0259, 0.0202], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 02:00:52,563 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37614.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 02:00:53,589 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=37615.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 02:00:57,229 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6523, 3.5749, 2.1613, 4.4079, 3.2498, 4.5014, 2.4792, 4.0422], device='cuda:3'), covar=tensor([0.0542, 0.0775, 0.1404, 0.0374, 0.0774, 0.0218, 0.1080, 0.0349], device='cuda:3'), in_proj_covar=tensor([0.0191, 0.0211, 0.0178, 0.0234, 0.0181, 0.0236, 0.0188, 0.0186], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 02:01:04,071 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37624.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 02:01:07,737 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1314, 4.1148, 5.1651, 2.9271, 4.4187, 2.6739, 3.0880, 2.0706], device='cuda:3'), covar=tensor([0.0803, 0.0601, 0.0072, 0.0703, 0.0511, 0.2163, 0.2408, 0.1678], device='cuda:3'), in_proj_covar=tensor([0.0195, 0.0213, 0.0117, 0.0167, 0.0225, 0.0245, 0.0279, 0.0208], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 02:01:24,580 INFO [train.py:898] (3/4) Epoch 11, batch 1300, loss[loss=0.188, simple_loss=0.2826, pruned_loss=0.04674, over 18504.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.274, pruned_loss=0.05232, over 3577390.43 frames. ], batch size: 51, lr: 1.04e-02, grad_scale: 8.0 2023-03-09 02:01:31,765 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.066e+02 3.446e+02 4.010e+02 4.726e+02 9.288e+02, threshold=8.020e+02, percent-clipped=3.0 2023-03-09 02:01:46,677 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37660.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 02:01:48,851 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=37662.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 02:02:00,426 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7834, 4.8218, 4.8972, 4.6312, 4.5709, 4.6204, 4.9985, 4.9576], device='cuda:3'), covar=tensor([0.0060, 0.0061, 0.0058, 0.0086, 0.0062, 0.0115, 0.0068, 0.0082], device='cuda:3'), in_proj_covar=tensor([0.0080, 0.0056, 0.0058, 0.0074, 0.0061, 0.0086, 0.0072, 0.0071], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 02:02:23,188 INFO [train.py:898] (3/4) Epoch 11, batch 1350, loss[loss=0.1625, simple_loss=0.2439, pruned_loss=0.04055, over 18450.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2733, pruned_loss=0.05193, over 3585464.75 frames. 
2023-03-09 02:02:43,280 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=37708.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:02:56,749 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37720.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:03:21,776 INFO [train.py:898] (3/4) Epoch 11, batch 1400, loss[loss=0.1902, simple_loss=0.2827, pruned_loss=0.04885, over 18562.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2739, pruned_loss=0.05218, over 3585809.81 frames. ], batch size: 54, lr: 1.04e-02, grad_scale: 4.0
2023-03-09 02:03:29,180 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.922e+02 3.017e+02 3.586e+02 4.269e+02 9.001e+02, threshold=7.171e+02, percent-clipped=1.0
2023-03-09 02:04:20,157 INFO [train.py:898] (3/4) Epoch 11, batch 1450, loss[loss=0.1882, simple_loss=0.2876, pruned_loss=0.04441, over 18340.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2727, pruned_loss=0.05158, over 3590612.39 frames. ], batch size: 55, lr: 1.04e-02, grad_scale: 4.0
2023-03-09 02:05:19,790 INFO [train.py:898] (3/4) Epoch 11, batch 1500, loss[loss=0.1582, simple_loss=0.2363, pruned_loss=0.04, over 18377.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2736, pruned_loss=0.05194, over 3590303.42 frames. ], batch size: 42, lr: 1.04e-02, grad_scale: 4.0
2023-03-09 02:05:27,763 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.812e+02 3.086e+02 3.620e+02 4.300e+02 1.406e+03, threshold=7.239e+02, percent-clipped=4.0
2023-03-09 02:06:18,398 INFO [train.py:898] (3/4) Epoch 11, batch 1550, loss[loss=0.1955, simple_loss=0.2798, pruned_loss=0.05559, over 18559.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2736, pruned_loss=0.05209, over 3591791.03 frames. ], batch size: 54, lr: 1.04e-02, grad_scale: 4.0
2023-03-09 02:06:18,715 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3235, 5.3024, 4.9374, 5.2284, 5.3209, 4.5511, 5.1691, 4.9731], device='cuda:3'), covar=tensor([0.0426, 0.0425, 0.1366, 0.0806, 0.0550, 0.0506, 0.0412, 0.0944], device='cuda:3'), in_proj_covar=tensor([0.0415, 0.0465, 0.0618, 0.0365, 0.0355, 0.0425, 0.0457, 0.0588], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 02:06:58,388 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37924.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:07:17,439 INFO [train.py:898] (3/4) Epoch 11, batch 1600, loss[loss=0.1747, simple_loss=0.2579, pruned_loss=0.04578, over 18297.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2726, pruned_loss=0.05172, over 3593513.80 frames. ], batch size: 49, lr: 1.04e-02, grad_scale: 8.0
2023-03-09 02:07:24,291 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.933e+02 3.097e+02 3.749e+02 4.631e+02 9.709e+02, threshold=7.497e+02, percent-clipped=4.0
2023-03-09 02:07:35,030 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4458, 3.7549, 5.1186, 4.3079, 3.1299, 2.7124, 4.2024, 5.2642], device='cuda:3'), covar=tensor([0.1015, 0.1452, 0.0102, 0.0379, 0.0981, 0.1282, 0.0419, 0.0223], device='cuda:3'), in_proj_covar=tensor([0.0138, 0.0243, 0.0102, 0.0159, 0.0177, 0.0174, 0.0173, 0.0143], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-09 02:07:54,471 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=37972.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:08:02,556 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37979.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:08:15,892 INFO [train.py:898] (3/4) Epoch 11, batch 1650, loss[loss=0.208, simple_loss=0.295, pruned_loss=0.06048, over 18085.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2739, pruned_loss=0.05214, over 3593820.27 frames. ], batch size: 62, lr: 1.04e-02, grad_scale: 8.0
2023-03-09 02:08:25,655 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0
2023-03-09 02:08:54,577 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.78 vs. limit=5.0
2023-03-09 02:08:55,148 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=38020.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:09:18,077 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=38040.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:09:18,752 INFO [train.py:898] (3/4) Epoch 11, batch 1700, loss[loss=0.1667, simple_loss=0.2449, pruned_loss=0.04423, over 18266.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.274, pruned_loss=0.05247, over 3594594.96 frames. ], batch size: 47, lr: 1.04e-02, grad_scale: 8.0
2023-03-09 02:09:25,204 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.262e+02 3.409e+02 3.922e+02 5.492e+02 2.210e+03, threshold=7.843e+02, percent-clipped=9.0
2023-03-09 02:09:50,078 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=38068.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:09:55,234 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0
2023-03-09 02:10:16,084 INFO [train.py:898] (3/4) Epoch 11, batch 1750, loss[loss=0.1704, simple_loss=0.2579, pruned_loss=0.04146, over 18536.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2743, pruned_loss=0.05258, over 3597231.87 frames. ], batch size: 49, lr: 1.04e-02, grad_scale: 8.0
2023-03-09 02:11:15,128 INFO [train.py:898] (3/4) Epoch 11, batch 1800, loss[loss=0.1835, simple_loss=0.2745, pruned_loss=0.04622, over 18492.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2741, pruned_loss=0.05259, over 3594088.70 frames. ], batch size: 51, lr: 1.04e-02, grad_scale: 8.0
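
The optim.py:369 records summarize the distribution of gradient norms since the previous report (min/25%/median/75%/max), the clipping threshold in force, and the percentage of recent steps that were clipped. In the records above the logged threshold tracks twice the logged median (e.g. 8.020e+02 vs. median 4.010e+02), matching Clipping_scale=2.0. A sketch of that bookkeeping follows; the class, the window size, and the exact threshold rule are assumptions for illustration, not this optimizer's actual code.

```python
import torch

# Illustrative quartile logging for gradient norms, with the clipping
# threshold assumed to be clipping_scale times a running median.
class GradNormMonitor:
    def __init__(self, clipping_scale=2.0, window=50):
        self.clipping_scale = clipping_scale
        self.window = window
        self.norms = []

    def step(self, model):
        # Overall l2 norm of all parameter gradients for this step.
        norm = torch.norm(torch.stack(
            [p.grad.norm() for p in model.parameters()
             if p.grad is not None])).item()
        self.norms.append(norm)
        return norm

    def report(self):
        t = torch.tensor(self.norms[-self.window:])
        q = torch.quantile(t, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = self.clipping_scale * q[2].item()  # 2x the median
        clipped = 100.0 * (t > threshold).float().mean().item()
        print(f"grad-norm quartiles {q.tolist()}, "
              f"threshold={threshold:.3e}, percent-clipped={clipped:.1f}")
```
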
2023-03-09 02:11:21,537 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.087e+02 3.079e+02 3.713e+02 4.653e+02 8.656e+02, threshold=7.427e+02, percent-clipped=3.0
2023-03-09 02:11:30,615 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5232, 3.4505, 4.9989, 4.3239, 3.1860, 2.8240, 4.2996, 5.0810], device='cuda:3'), covar=tensor([0.0836, 0.1520, 0.0090, 0.0293, 0.0844, 0.1146, 0.0342, 0.0144], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0241, 0.0100, 0.0158, 0.0174, 0.0173, 0.0170, 0.0142], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-09 02:12:12,413 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-09 02:12:12,797 INFO [train.py:898] (3/4) Epoch 11, batch 1850, loss[loss=0.1651, simple_loss=0.244, pruned_loss=0.04308, over 17636.00 frames. ], tot_loss[loss=0.1904, simple_loss=0.2747, pruned_loss=0.05308, over 3591266.37 frames. ], batch size: 39, lr: 1.04e-02, grad_scale: 8.0
2023-03-09 02:12:33,447 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0051, 3.5778, 5.0186, 3.0710, 4.2030, 2.5397, 3.0591, 1.9162], device='cuda:3'), covar=tensor([0.0856, 0.0830, 0.0081, 0.0628, 0.0588, 0.2346, 0.2285, 0.1788], device='cuda:3'), in_proj_covar=tensor([0.0196, 0.0216, 0.0118, 0.0169, 0.0230, 0.0248, 0.0281, 0.0211], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 02:12:40,157 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.52 vs. limit=5.0
2023-03-09 02:13:12,758 INFO [train.py:898] (3/4) Epoch 11, batch 1900, loss[loss=0.2206, simple_loss=0.3011, pruned_loss=0.07005, over 18451.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.2747, pruned_loss=0.05297, over 3590469.37 frames. ], batch size: 59, lr: 1.03e-02, grad_scale: 8.0
2023-03-09 02:13:13,211 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6282, 2.8080, 2.4551, 2.6760, 3.5820, 3.4617, 3.1604, 2.8738], device='cuda:3'), covar=tensor([0.0157, 0.0308, 0.0687, 0.0405, 0.0216, 0.0149, 0.0325, 0.0369], device='cuda:3'), in_proj_covar=tensor([0.0119, 0.0106, 0.0150, 0.0138, 0.0104, 0.0088, 0.0136, 0.0129], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:13:19,699 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.086e+02 3.362e+02 3.965e+02 4.724e+02 1.180e+03, threshold=7.931e+02, percent-clipped=5.0
2023-03-09 02:13:32,892 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0
2023-03-09 02:13:34,762 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1549, 5.2893, 2.8354, 5.1251, 5.0097, 5.3329, 5.1250, 2.5548], device='cuda:3'), covar=tensor([0.0154, 0.0073, 0.0709, 0.0076, 0.0067, 0.0063, 0.0104, 0.1002], device='cuda:3'), in_proj_covar=tensor([0.0075, 0.0067, 0.0087, 0.0080, 0.0075, 0.0065, 0.0076, 0.0090], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 02:14:02,988 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1579, 5.3431, 3.0448, 5.1677, 5.0551, 5.3653, 5.2077, 2.8084], device='cuda:3'), covar=tensor([0.0147, 0.0059, 0.0632, 0.0068, 0.0068, 0.0067, 0.0078, 0.0932], device='cuda:3'), in_proj_covar=tensor([0.0076, 0.0066, 0.0087, 0.0080, 0.0075, 0.0065, 0.0076, 0.0090], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 02:14:11,594 INFO [train.py:898] (3/4) Epoch 11, batch 1950, loss[loss=0.156, simple_loss=0.241, pruned_loss=0.03553, over 18400.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.274, pruned_loss=0.05259, over 3586471.11 frames. ], batch size: 48, lr: 1.03e-02, grad_scale: 8.0
2023-03-09 02:14:42,813 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.18 vs. limit=5.0
2023-03-09 02:15:01,201 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-09 02:15:03,836 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0
2023-03-09 02:15:04,239 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=38335.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:15:10,805 INFO [train.py:898] (3/4) Epoch 11, batch 2000, loss[loss=0.1919, simple_loss=0.2817, pruned_loss=0.05103, over 18544.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2743, pruned_loss=0.05278, over 3570058.90 frames. ], batch size: 54, lr: 1.03e-02, grad_scale: 8.0
2023-03-09 02:15:17,659 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.188e+02 3.163e+02 3.706e+02 4.529e+02 9.366e+02, threshold=7.411e+02, percent-clipped=1.0
2023-03-09 02:15:32,712 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=38360.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:15:57,546 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5097, 3.5170, 2.0964, 4.3172, 3.0135, 4.3838, 2.3242, 3.8630], device='cuda:3'), covar=tensor([0.0557, 0.0793, 0.1497, 0.0458, 0.0820, 0.0264, 0.1232, 0.0416], device='cuda:3'), in_proj_covar=tensor([0.0193, 0.0213, 0.0178, 0.0240, 0.0180, 0.0241, 0.0191, 0.0187], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:15:58,940 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-09 02:16:08,896 INFO [train.py:898] (3/4) Epoch 11, batch 2050, loss[loss=0.21, simple_loss=0.2902, pruned_loss=0.0649, over 18355.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2747, pruned_loss=0.05285, over 3576367.77 frames. ], batch size: 56, lr: 1.03e-02, grad_scale: 8.0
2023-03-09 02:16:10,524 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.48 vs. limit=5.0
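
The scaling.py:679 records come from an activation-whitening constraint: channels are split into num_groups groups, a covariance is estimated per group, and a scalar measure of how far that covariance is from white (identity-like) is compared against a limit; values above the limit trigger a corrective penalty. One plausible formulation of such a metric, assumed here purely for illustration (it equals 1.0 for perfectly white features and grows as the spectrum becomes less uniform):

```python
import torch

# Assumed whitening metric: ratio of the mean squared eigenvalue of the
# per-group covariance to the squared mean eigenvalue. Equals 1.0 when the
# covariance is a multiple of the identity; larger means less white.
def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    n, c = x.shape                                 # (frames, channels)
    x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
    x = x - x.mean(dim=1, keepdim=True)            # center per group
    cov = torch.matmul(x.transpose(1, 2), x) / n   # per-group covariance
    d = cov.shape[-1]
    tr = cov.diagonal(dim1=-2, dim2=-1).sum(-1)    # sum of eigenvalues
    tr_sq = (cov * cov).sum(dim=(-2, -1))          # sum of squared eigenvalues
    metric = d * tr_sq / tr.pow(2)                 # >= 1, == 1 iff white
    return metric.mean().item()

# e.g. compared against limit=2.0 (num_groups=8) or limit=5.0 (num_groups=1)
# as in the records above.
```
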
2023-03-09 02:16:44,609 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=38421.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:17:08,278 INFO [train.py:898] (3/4) Epoch 11, batch 2100, loss[loss=0.1974, simple_loss=0.28, pruned_loss=0.05744, over 17802.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.274, pruned_loss=0.05259, over 3565288.03 frames. ], batch size: 70, lr: 1.03e-02, grad_scale: 4.0
2023-03-09 02:17:15,906 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.369e+02 3.250e+02 4.019e+02 4.989e+02 1.105e+03, threshold=8.037e+02, percent-clipped=2.0
2023-03-09 02:18:07,060 INFO [train.py:898] (3/4) Epoch 11, batch 2150, loss[loss=0.2487, simple_loss=0.3063, pruned_loss=0.09552, over 12336.00 frames. ], tot_loss[loss=0.19, simple_loss=0.2743, pruned_loss=0.05282, over 3567558.80 frames. ], batch size: 129, lr: 1.03e-02, grad_scale: 4.0
2023-03-09 02:19:05,993 INFO [train.py:898] (3/4) Epoch 11, batch 2200, loss[loss=0.1749, simple_loss=0.2597, pruned_loss=0.04504, over 18550.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2737, pruned_loss=0.05244, over 3584061.40 frames. ], batch size: 49, lr: 1.03e-02, grad_scale: 4.0
2023-03-09 02:19:07,856 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.75 vs. limit=2.0
2023-03-09 02:19:13,834 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.988e+02 3.259e+02 3.995e+02 5.001e+02 1.029e+03, threshold=7.990e+02, percent-clipped=4.0
2023-03-09 02:20:04,936 INFO [train.py:898] (3/4) Epoch 11, batch 2250, loss[loss=0.1943, simple_loss=0.2851, pruned_loss=0.05174, over 18342.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2741, pruned_loss=0.05221, over 3589699.11 frames. ], batch size: 55, lr: 1.03e-02, grad_scale: 4.0
2023-03-09 02:20:57,244 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=38635.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:21:04,279 INFO [train.py:898] (3/4) Epoch 11, batch 2300, loss[loss=0.2427, simple_loss=0.3144, pruned_loss=0.08553, over 12348.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.2741, pruned_loss=0.0524, over 3574375.56 frames. ], batch size: 130, lr: 1.03e-02, grad_scale: 4.0
2023-03-09 02:21:12,485 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.254e+02 3.045e+02 3.798e+02 4.329e+02 8.065e+02, threshold=7.597e+02, percent-clipped=1.0
2023-03-09 02:21:18,781 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0
2023-03-09 02:21:19,732 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7146, 5.2570, 5.2616, 5.2900, 4.8203, 5.0860, 4.5649, 5.0704], device='cuda:3'), covar=tensor([0.0274, 0.0302, 0.0232, 0.0327, 0.0416, 0.0282, 0.1211, 0.0377], device='cuda:3'), in_proj_covar=tensor([0.0177, 0.0222, 0.0208, 0.0250, 0.0224, 0.0227, 0.0284, 0.0212], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3')
2023-03-09 02:21:33,934 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-09 02:21:54,107 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=38683.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:22:03,154 INFO [train.py:898] (3/4) Epoch 11, batch 2350, loss[loss=0.2194, simple_loss=0.2972, pruned_loss=0.07082, over 18355.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2742, pruned_loss=0.05271, over 3567378.59 frames. ], batch size: 56, lr: 1.03e-02, grad_scale: 4.0
2023-03-09 02:22:32,501 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=38716.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:22:57,606 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6903, 3.7388, 5.0924, 4.2804, 3.2828, 2.9863, 4.5801, 5.2234], device='cuda:3'), covar=tensor([0.0761, 0.1454, 0.0163, 0.0348, 0.0834, 0.1041, 0.0312, 0.0210], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0242, 0.0104, 0.0159, 0.0178, 0.0176, 0.0172, 0.0146], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-09 02:23:01,665 INFO [train.py:898] (3/4) Epoch 11, batch 2400, loss[loss=0.1912, simple_loss=0.2778, pruned_loss=0.05227, over 18299.00 frames. ], tot_loss[loss=0.189, simple_loss=0.2736, pruned_loss=0.05223, over 3580751.40 frames. ], batch size: 54, lr: 1.03e-02, grad_scale: 8.0
2023-03-09 02:23:10,150 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.250e+02 3.115e+02 4.067e+02 4.907e+02 9.173e+02, threshold=8.134e+02, percent-clipped=4.0
2023-03-09 02:23:12,119 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.94 vs. limit=2.0
2023-03-09 02:24:00,885 INFO [train.py:898] (3/4) Epoch 11, batch 2450, loss[loss=0.1793, simple_loss=0.2629, pruned_loss=0.0479, over 18342.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2728, pruned_loss=0.05206, over 3574818.66 frames. ], batch size: 56, lr: 1.03e-02, grad_scale: 8.0
2023-03-09 02:24:38,884 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=38823.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:24:57,968 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1927, 5.1282, 5.3192, 5.2941, 5.1413, 5.9435, 5.5428, 5.2663], device='cuda:3'), covar=tensor([0.0986, 0.0583, 0.0575, 0.0709, 0.1294, 0.0644, 0.0621, 0.1513], device='cuda:3'), in_proj_covar=tensor([0.0309, 0.0239, 0.0251, 0.0255, 0.0290, 0.0356, 0.0234, 0.0346], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3')
2023-03-09 02:24:59,962 INFO [train.py:898] (3/4) Epoch 11, batch 2500, loss[loss=0.1769, simple_loss=0.2514, pruned_loss=0.05118, over 18388.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2729, pruned_loss=0.05209, over 3569455.58 frames. ], batch size: 42, lr: 1.03e-02, grad_scale: 8.0
2023-03-09 02:25:08,486 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.178e+02 3.118e+02 3.887e+02 4.654e+02 1.248e+03, threshold=7.775e+02, percent-clipped=2.0
2023-03-09 02:25:50,704 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=38884.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:25:58,831 INFO [train.py:898] (3/4) Epoch 11, batch 2550, loss[loss=0.1988, simple_loss=0.2857, pruned_loss=0.05591, over 18300.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2732, pruned_loss=0.05206, over 3575504.88 frames. ], batch size: 54, lr: 1.03e-02, grad_scale: 8.0
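
The zipformer.py:625 records implement stochastic layer skipping tied to a per-stack warmup schedule: each encoder stack has a warmup_begin/warmup_end interval in batch counts, and on each batch a small number of layers may be randomly dropped; num_to_drop=0, layers_to_drop=set() means the batch ran with all layers. Since batch_count here is far past warmup_end yet occasional drops still appear later in the log, the drop probability evidently decays to a small floor rather than to zero. A hedged sketch of such a policy follows; the probabilities are assumptions for illustration, not the recipe's exact rule.

```python
import random

# Illustrative layer-drop policy: aggressive early in a stack's warmup
# interval, decaying to a small floor afterwards. All constants are
# assumptions chosen only to reproduce the qualitative logged behaviour.
def choose_layers_to_drop(num_layers, batch_count,
                          warmup_begin, warmup_end):
    if batch_count < warmup_begin:
        p = 0.5                               # early: drop aggressively
    elif batch_count < warmup_end:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = 0.5 * (1.0 - frac) + 0.05 * frac  # anneal towards the floor
    else:
        p = 0.05                              # late: rare exploratory drops
    drop = {i for i in range(num_layers) if random.random() < p}
    # Logged as num_to_drop=len(drop), layers_to_drop=drop.
    return drop
```
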
2023-03-09 02:26:22,097 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0198, 4.9509, 5.0947, 4.7981, 4.7612, 4.8723, 5.1823, 5.1388], device='cuda:3'), covar=tensor([0.0058, 0.0070, 0.0067, 0.0093, 0.0059, 0.0094, 0.0084, 0.0086], device='cuda:3'), in_proj_covar=tensor([0.0080, 0.0056, 0.0059, 0.0075, 0.0061, 0.0086, 0.0071, 0.0071], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:26:43,533 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.98 vs. limit=2.0
2023-03-09 02:26:57,552 INFO [train.py:898] (3/4) Epoch 11, batch 2600, loss[loss=0.1903, simple_loss=0.2745, pruned_loss=0.05301, over 18276.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2721, pruned_loss=0.05158, over 3584382.43 frames. ], batch size: 47, lr: 1.03e-02, grad_scale: 8.0
2023-03-09 02:27:04,367 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.68 vs. limit=5.0
2023-03-09 02:27:06,473 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.982e+02 2.998e+02 3.498e+02 4.234e+02 9.480e+02, threshold=6.995e+02, percent-clipped=2.0
2023-03-09 02:27:56,943 INFO [train.py:898] (3/4) Epoch 11, batch 2650, loss[loss=0.2121, simple_loss=0.2962, pruned_loss=0.06399, over 18355.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2721, pruned_loss=0.05133, over 3594845.77 frames. ], batch size: 56, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:28:21,689 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4923, 5.0761, 5.0304, 5.2595, 4.5141, 4.8441, 3.8927, 4.9501], device='cuda:3'), covar=tensor([0.0318, 0.0438, 0.0346, 0.0315, 0.0479, 0.0384, 0.2039, 0.0382], device='cuda:3'), in_proj_covar=tensor([0.0176, 0.0222, 0.0207, 0.0249, 0.0222, 0.0225, 0.0285, 0.0211], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3')
2023-03-09 02:28:27,473 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39016.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:28:56,263 INFO [train.py:898] (3/4) Epoch 11, batch 2700, loss[loss=0.1607, simple_loss=0.249, pruned_loss=0.03624, over 18551.00 frames. ], tot_loss[loss=0.188, simple_loss=0.273, pruned_loss=0.05148, over 3587028.78 frames. ], batch size: 49, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:29:04,927 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.335e+02 3.292e+02 3.968e+02 4.768e+02 1.831e+03, threshold=7.936e+02, percent-clipped=8.0
2023-03-09 02:29:18,508 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0
2023-03-09 02:29:24,711 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=39064.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:29:46,177 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8833, 3.5085, 4.8868, 2.7671, 4.1697, 2.5512, 3.0738, 1.6337], device='cuda:3'), covar=tensor([0.0892, 0.0862, 0.0082, 0.0713, 0.0612, 0.2112, 0.2118, 0.1880], device='cuda:3'), in_proj_covar=tensor([0.0195, 0.0216, 0.0117, 0.0169, 0.0229, 0.0247, 0.0283, 0.0210], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 02:29:55,896 INFO [train.py:898] (3/4) Epoch 11, batch 2750, loss[loss=0.1676, simple_loss=0.2608, pruned_loss=0.03716, over 18284.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2728, pruned_loss=0.05173, over 3579990.20 frames. ], batch size: 49, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:30:22,156 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8150, 2.8531, 2.6590, 2.7207, 3.6270, 3.4459, 3.1941, 2.9925], device='cuda:3'), covar=tensor([0.0162, 0.0269, 0.0520, 0.0349, 0.0208, 0.0175, 0.0331, 0.0339], device='cuda:3'), in_proj_covar=tensor([0.0120, 0.0107, 0.0148, 0.0135, 0.0104, 0.0090, 0.0134, 0.0129], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:30:55,618 INFO [train.py:898] (3/4) Epoch 11, batch 2800, loss[loss=0.1796, simple_loss=0.263, pruned_loss=0.0481, over 18360.00 frames. ], tot_loss[loss=0.1877, simple_loss=0.2726, pruned_loss=0.05145, over 3590898.64 frames. ], batch size: 46, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:31:04,050 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.053e+02 3.386e+02 4.032e+02 4.876e+02 1.472e+03, threshold=8.064e+02, percent-clipped=5.0
2023-03-09 02:31:42,093 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39179.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:31:49,038 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39185.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:31:55,188 INFO [train.py:898] (3/4) Epoch 11, batch 2850, loss[loss=0.2195, simple_loss=0.3054, pruned_loss=0.06678, over 18293.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2736, pruned_loss=0.05197, over 3585527.70 frames. ], batch size: 54, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:32:54,417 INFO [train.py:898] (3/4) Epoch 11, batch 2900, loss[loss=0.1531, simple_loss=0.235, pruned_loss=0.03561, over 18266.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2728, pruned_loss=0.05163, over 3589119.34 frames. ], batch size: 45, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:33:00,514 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39246.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:33:02,139 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.400e+02 3.149e+02 3.663e+02 4.555e+02 1.238e+03, threshold=7.326e+02, percent-clipped=2.0
2023-03-09 02:33:05,835 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2995, 4.3066, 2.5955, 4.3248, 5.2665, 2.4189, 3.7815, 3.8800], device='cuda:3'), covar=tensor([0.0060, 0.1046, 0.1521, 0.0484, 0.0038, 0.1387, 0.0694, 0.0774], device='cuda:3'), in_proj_covar=tensor([0.0117, 0.0235, 0.0191, 0.0187, 0.0090, 0.0176, 0.0202, 0.0205], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:33:45,333 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39284.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:33:53,633 INFO [train.py:898] (3/4) Epoch 11, batch 2950, loss[loss=0.1708, simple_loss=0.2527, pruned_loss=0.04441, over 18355.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.2712, pruned_loss=0.05103, over 3588945.68 frames. ], batch size: 46, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:34:32,738 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0
2023-03-09 02:34:53,169 INFO [train.py:898] (3/4) Epoch 11, batch 3000, loss[loss=0.2175, simple_loss=0.2997, pruned_loss=0.06766, over 18343.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.2707, pruned_loss=0.05041, over 3597317.37 frames. ], batch size: 56, lr: 1.02e-02, grad_scale: 8.0
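
The zipformer.py:1455 records are diagnostics: the per-head entropy of the attention distributions, together with coefficient-of-variation style statistics of the projection weights. Low entropy means a head concentrates on few positions; high entropy means it spreads over many. A minimal sketch of the entropy part follows; the averaging over batch and query positions is an assumed reduction, not necessarily the exact one used here.

```python
import torch

# Illustrative per-head attention entropy. attn_weights has shape
# (num_heads, batch, query_len, key_len) and each row sums to 1.
def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    eps = 1.0e-20  # guard against log(0)
    ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return ent.mean(dim=(1, 2))  # one entropy value per head

# Example: 8 heads, as in the logged 8-element tensors above.
probs = torch.softmax(torch.randn(8, 4, 10, 10), dim=-1)
print(attn_weights_entropy(probs))
```
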
2023-03-09 02:34:53,169 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 02:35:05,602 INFO [train.py:932] (3/4) Epoch 11, validation: loss=0.1587, simple_loss=0.2603, pruned_loss=0.02852, over 944034.00 frames.
2023-03-09 02:35:05,603 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 02:35:10,807 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39345.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:35:13,866 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.978e+02 3.242e+02 3.927e+02 4.658e+02 9.416e+02, threshold=7.854e+02, percent-clipped=4.0
2023-03-09 02:35:31,305 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.61 vs. limit=5.0
2023-03-09 02:35:40,360 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5439, 5.4914, 5.0530, 5.4547, 5.4172, 4.7696, 5.3669, 5.1217], device='cuda:3'), covar=tensor([0.0352, 0.0360, 0.1348, 0.0728, 0.0494, 0.0380, 0.0365, 0.0800], device='cuda:3'), in_proj_covar=tensor([0.0408, 0.0468, 0.0626, 0.0370, 0.0357, 0.0430, 0.0456, 0.0586], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 02:36:04,375 INFO [train.py:898] (3/4) Epoch 11, batch 3050, loss[loss=0.1898, simple_loss=0.2762, pruned_loss=0.05172, over 18106.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2712, pruned_loss=0.05052, over 3599106.41 frames. ], batch size: 62, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:36:12,191 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6129, 3.2921, 2.1686, 4.3204, 2.9329, 4.2393, 2.3626, 3.9049], device='cuda:3'), covar=tensor([0.0562, 0.0862, 0.1451, 0.0402, 0.0874, 0.0210, 0.1226, 0.0366], device='cuda:3'), in_proj_covar=tensor([0.0191, 0.0210, 0.0177, 0.0238, 0.0180, 0.0238, 0.0190, 0.0185], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:36:14,705 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.84 vs. limit=2.0
2023-03-09 02:37:04,067 INFO [train.py:898] (3/4) Epoch 11, batch 3100, loss[loss=0.1845, simple_loss=0.2718, pruned_loss=0.04861, over 18263.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.2718, pruned_loss=0.05068, over 3599478.58 frames. ], batch size: 49, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:37:12,136 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.117e+02 3.287e+02 3.708e+02 4.469e+02 1.141e+03, threshold=7.415e+02, percent-clipped=2.0
2023-03-09 02:37:18,707 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0
2023-03-09 02:37:30,883 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39463.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:37:49,490 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39479.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:38:02,625 INFO [train.py:898] (3/4) Epoch 11, batch 3150, loss[loss=0.199, simple_loss=0.2904, pruned_loss=0.05381, over 18291.00 frames. ], tot_loss[loss=0.1873, simple_loss=0.2725, pruned_loss=0.05112, over 3581443.14 frames. ], batch size: 49, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:38:03,995 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4904, 5.1652, 5.6237, 5.5732, 5.4227, 6.2316, 5.8736, 5.5438], device='cuda:3'), covar=tensor([0.1098, 0.0619, 0.0636, 0.0659, 0.1528, 0.0677, 0.0573, 0.1669], device='cuda:3'), in_proj_covar=tensor([0.0310, 0.0239, 0.0251, 0.0253, 0.0292, 0.0357, 0.0235, 0.0346], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3')
2023-03-09 02:38:42,096 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39524.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:38:45,268 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=39527.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:38:54,174 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8948, 4.9034, 4.9824, 4.7348, 4.7353, 4.7620, 5.1713, 5.1181], device='cuda:3'), covar=tensor([0.0063, 0.0072, 0.0062, 0.0103, 0.0060, 0.0109, 0.0058, 0.0090], device='cuda:3'), in_proj_covar=tensor([0.0082, 0.0057, 0.0061, 0.0077, 0.0063, 0.0088, 0.0074, 0.0073], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:39:02,009 INFO [train.py:898] (3/4) Epoch 11, batch 3200, loss[loss=0.1641, simple_loss=0.2495, pruned_loss=0.03931, over 18483.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2709, pruned_loss=0.05063, over 3581425.64 frames. ], batch size: 47, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:39:02,205 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39541.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:39:09,606 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.046e+02 3.172e+02 3.769e+02 4.644e+02 9.591e+02, threshold=7.537e+02, percent-clipped=4.0
2023-03-09 02:39:50,957 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-09 02:40:01,202 INFO [train.py:898] (3/4) Epoch 11, batch 3250, loss[loss=0.1676, simple_loss=0.2526, pruned_loss=0.04128, over 18530.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2709, pruned_loss=0.05071, over 3580146.77 frames. ], batch size: 49, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:40:58,316 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39639.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 02:40:59,374 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39640.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:41:00,290 INFO [train.py:898] (3/4) Epoch 11, batch 3300, loss[loss=0.1769, simple_loss=0.271, pruned_loss=0.04141, over 18297.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.272, pruned_loss=0.05134, over 3578105.88 frames. ], batch size: 54, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:41:08,718 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.189e+02 3.111e+02 3.675e+02 4.353e+02 7.934e+02, threshold=7.351e+02, percent-clipped=2.0
2023-03-09 02:41:59,390 INFO [train.py:898] (3/4) Epoch 11, batch 3350, loss[loss=0.1771, simple_loss=0.2669, pruned_loss=0.04365, over 18403.00 frames. ], tot_loss[loss=0.187, simple_loss=0.272, pruned_loss=0.05099, over 3580515.22 frames. ], batch size: 52, lr: 1.02e-02, grad_scale: 8.0
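
The tot_loss[...] figures are not a single batch: the "over N frames" count climbs through the epoch and then hovers around 3.6M frames, so they are a frame-weighted aggregate of recent batches, with every component normalized per frame. A sketch of aggregation consistent with that shape follows; the decayed-sum formulation and the decay constant are assumptions, not this script's actual bookkeeping.

```python
# Illustrative frame-weighted running aggregate for the tot_loss records.
# A decayed sum of (per-frame loss * frames) and of frames keeps the
# "over N frames" figure roughly constant once warmed up; the decay value
# below is an assumption chosen only to match that qualitative behaviour.
class RunningLoss:
    def __init__(self, decay=0.995):
        self.decay = decay
        self.loss_frames = 0.0  # decayed sum of per-frame loss * frames
        self.frames = 0.0       # decayed sum of frames

    def update(self, batch_loss, batch_frames):
        self.loss_frames = self.decay * self.loss_frames + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames

    @property
    def tot_loss(self):
        return self.loss_frames / max(self.frames, 1.0)
```
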
2023-03-09 02:42:10,680 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39700.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 02:42:47,200 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6718, 4.2253, 2.7087, 3.9589, 4.0297, 4.2222, 4.0881, 2.6685], device='cuda:3'), covar=tensor([0.0173, 0.0058, 0.0650, 0.0205, 0.0076, 0.0061, 0.0083, 0.0823], device='cuda:3'), in_proj_covar=tensor([0.0078, 0.0066, 0.0087, 0.0081, 0.0076, 0.0065, 0.0075, 0.0090], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 02:42:58,375 INFO [train.py:898] (3/4) Epoch 11, batch 3400, loss[loss=0.2034, simple_loss=0.2815, pruned_loss=0.0626, over 18356.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2733, pruned_loss=0.05138, over 3578186.77 frames. ], batch size: 56, lr: 1.02e-02, grad_scale: 8.0
2023-03-09 02:43:00,267 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.85 vs. limit=5.0
2023-03-09 02:43:06,473 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.021e+02 3.201e+02 3.760e+02 4.727e+02 8.419e+02, threshold=7.521e+02, percent-clipped=1.0
2023-03-09 02:43:30,047 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39767.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:43:57,997 INFO [train.py:898] (3/4) Epoch 11, batch 3450, loss[loss=0.1906, simple_loss=0.2791, pruned_loss=0.05102, over 17676.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2724, pruned_loss=0.05094, over 3590445.72 frames. ], batch size: 70, lr: 1.01e-02, grad_scale: 8.0
2023-03-09 02:44:25,734 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6093, 5.1692, 5.1358, 5.1524, 4.7334, 4.9867, 4.4646, 5.0394], device='cuda:3'), covar=tensor([0.0239, 0.0270, 0.0208, 0.0341, 0.0379, 0.0260, 0.1179, 0.0281], device='cuda:3'), in_proj_covar=tensor([0.0178, 0.0221, 0.0211, 0.0254, 0.0221, 0.0227, 0.0282, 0.0210], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3')
2023-03-09 02:44:26,892 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39815.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:44:31,382 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39819.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:44:42,970 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39828.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:44:57,115 INFO [train.py:898] (3/4) Epoch 11, batch 3500, loss[loss=0.2052, simple_loss=0.2856, pruned_loss=0.0624, over 18271.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2728, pruned_loss=0.0512, over 3585787.26 frames. ], batch size: 60, lr: 1.01e-02, grad_scale: 8.0
2023-03-09 02:44:57,412 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39841.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:45:05,096 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.228e+02 3.240e+02 3.890e+02 4.468e+02 8.251e+02, threshold=7.780e+02, percent-clipped=2.0
2023-03-09 02:45:37,809 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39876.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:45:52,236 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=39889.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:45:54,247 INFO [train.py:898] (3/4) Epoch 11, batch 3550, loss[loss=0.1972, simple_loss=0.2866, pruned_loss=0.05391, over 18129.00 frames. ], tot_loss[loss=0.1869, simple_loss=0.2719, pruned_loss=0.051, over 3578112.35 frames. ], batch size: 62, lr: 1.01e-02, grad_scale: 8.0
2023-03-09 02:46:48,046 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39940.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:46:48,865 INFO [train.py:898] (3/4) Epoch 11, batch 3600, loss[loss=0.213, simple_loss=0.299, pruned_loss=0.06346, over 18216.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2729, pruned_loss=0.05114, over 3586864.14 frames. ], batch size: 60, lr: 1.01e-02, grad_scale: 8.0
2023-03-09 02:46:55,874 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.247e+02 3.210e+02 3.696e+02 4.808e+02 8.251e+02, threshold=7.392e+02, percent-clipped=2.0
2023-03-09 02:47:53,916 INFO [train.py:898] (3/4) Epoch 12, batch 0, loss[loss=0.1667, simple_loss=0.243, pruned_loss=0.04516, over 18425.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.243, pruned_loss=0.04516, over 18425.00 frames. ], batch size: 43, lr: 9.70e-03, grad_scale: 8.0
2023-03-09 02:47:53,916 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 02:48:00,994 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6419, 4.2067, 2.5527, 4.2062, 3.9298, 4.2397, 4.0548, 2.3980], device='cuda:3'), covar=tensor([0.0194, 0.0067, 0.0761, 0.0087, 0.0098, 0.0064, 0.0116, 0.1037], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0067, 0.0088, 0.0082, 0.0077, 0.0066, 0.0076, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 02:48:04,721 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1811, 4.5214, 4.4833, 4.4219, 4.2406, 4.2207, 4.5503, 4.4892], device='cuda:3'), covar=tensor([0.0980, 0.0535, 0.0304, 0.0496, 0.1215, 0.0645, 0.0588, 0.0564], device='cuda:3'), in_proj_covar=tensor([0.0517, 0.0429, 0.0324, 0.0455, 0.0630, 0.0461, 0.0605, 0.0452], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 02:48:05,854 INFO [train.py:932] (3/4) Epoch 12, validation: loss=0.1577, simple_loss=0.2601, pruned_loss=0.02771, over 944034.00 frames.
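
At the epoch 11 to epoch 12 boundary above, the learning rate steps from 1.01e-02 to 9.70e-03 and then keeps shrinking every few hundred batches (9.69e-03, 9.68e-03, ...), i.e. the schedule decays in both the batch index and the epoch index. A sketch of a schedule with that shape, patterned on icefall's Eden scheduler, follows; the default constants are assumptions for illustration and were not read out of this run.

```python
# Illustrative batch-and-epoch learning-rate decay in the style of
# icefall's Eden scheduler. lr_batches and lr_epochs set where each
# decay "kicks in"; the values here are assumed defaults.
def learning_rate(base_lr, batch, epoch,
                  lr_batches=5000.0, lr_epochs=3.5):
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# The lr shrinks smoothly with batch count within an epoch, and takes a
# visible step down when the epoch index increments, as in the log above.
```
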
2023-03-09 02:48:05,854 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 02:48:21,448 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=39988.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:48:29,613 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39995.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 02:49:09,591 INFO [train.py:898] (3/4) Epoch 12, batch 50, loss[loss=0.192, simple_loss=0.2743, pruned_loss=0.05482, over 18268.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2735, pruned_loss=0.0525, over 807125.27 frames. ], batch size: 57, lr: 9.69e-03, grad_scale: 8.0
2023-03-09 02:49:35,919 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.813e+02 3.496e+02 4.076e+02 5.396e+02 1.029e+03, threshold=8.152e+02, percent-clipped=4.0
2023-03-09 02:50:01,632 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.13 vs. limit=5.0
2023-03-09 02:50:05,898 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7867, 4.7568, 4.7883, 4.5811, 4.4420, 4.5796, 4.9948, 4.9196], device='cuda:3'), covar=tensor([0.0057, 0.0064, 0.0067, 0.0086, 0.0063, 0.0116, 0.0063, 0.0083], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0056, 0.0060, 0.0075, 0.0062, 0.0087, 0.0073, 0.0071], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:50:08,400 INFO [train.py:898] (3/4) Epoch 12, batch 100, loss[loss=0.2073, simple_loss=0.2921, pruned_loss=0.06123, over 18358.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2711, pruned_loss=0.05058, over 1428811.18 frames. ], batch size: 55, lr: 9.69e-03, grad_scale: 4.0
2023-03-09 02:50:15,008 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0
2023-03-09 02:50:31,559 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0
2023-03-09 02:51:00,708 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40119.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:51:05,196 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40123.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:51:07,306 INFO [train.py:898] (3/4) Epoch 12, batch 150, loss[loss=0.1843, simple_loss=0.2684, pruned_loss=0.05014, over 18297.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2708, pruned_loss=0.05018, over 1912021.65 frames. ], batch size: 49, lr: 9.68e-03, grad_scale: 4.0
2023-03-09 02:51:36,223 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.055e+02 3.093e+02 3.775e+02 4.453e+02 9.107e+02, threshold=7.551e+02, percent-clipped=1.0
2023-03-09 02:51:57,431 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=40167.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:52:01,999 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40171.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:52:06,358 INFO [train.py:898] (3/4) Epoch 12, batch 200, loss[loss=0.1937, simple_loss=0.2862, pruned_loss=0.05056, over 18626.00 frames. ], tot_loss[loss=0.1854, simple_loss=0.2708, pruned_loss=0.05005, over 2288168.98 frames. ], batch size: 52, lr: 9.68e-03, grad_scale: 4.0
2023-03-09 02:52:29,900 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40195.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:52:58,286 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40218.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:53:05,984 INFO [train.py:898] (3/4) Epoch 12, batch 250, loss[loss=0.181, simple_loss=0.2684, pruned_loss=0.04678, over 17678.00 frames. ], tot_loss[loss=0.1854, simple_loss=0.2706, pruned_loss=0.05011, over 2574952.32 frames. ], batch size: 70, lr: 9.67e-03, grad_scale: 4.0
2023-03-09 02:53:33,647 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.098e+02 3.173e+02 3.973e+02 4.861e+02 1.364e+03, threshold=7.946e+02, percent-clipped=3.0
2023-03-09 02:53:42,349 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40256.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:54:01,792 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40272.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:54:05,002 INFO [train.py:898] (3/4) Epoch 12, batch 300, loss[loss=0.1811, simple_loss=0.2594, pruned_loss=0.05139, over 18155.00 frames. ], tot_loss[loss=0.1853, simple_loss=0.2707, pruned_loss=0.04998, over 2807816.06 frames. ], batch size: 44, lr: 9.66e-03, grad_scale: 4.0
2023-03-09 02:54:09,969 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40279.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:54:28,475 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40295.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 02:55:04,172 INFO [train.py:898] (3/4) Epoch 12, batch 350, loss[loss=0.1621, simple_loss=0.2465, pruned_loss=0.03883, over 18257.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2702, pruned_loss=0.0497, over 2996897.03 frames. ], batch size: 45, lr: 9.66e-03, grad_scale: 4.0
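
The grad_scale field in the batch records moves between values like 8.0 and 4.0: this is the dynamic loss scale of fp16 mixed-precision training, which is halved when a scaled gradient overflows and periodically doubled after a run of clean steps. A minimal sketch using PyTorch's own GradScaler follows; the constructor arguments shown are GradScaler's real knobs (with an illustrative init_scale), while model, optimizer, loss_fn, and batch are placeholders.

```python
import torch

# Minimal fp16 training step with dynamic loss scaling. GradScaler halves
# the scale on overflow (backoff_factor) and doubles it after
# growth_interval clean steps (growth_factor), which is why the logged
# grad_scale steps between powers of two such as 4.0 and 8.0.
scaler = torch.cuda.amp.GradScaler(init_scale=2.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)

def train_step(model, optimizer, loss_fn, batch):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model, batch)   # forward in reduced precision
    scaler.scale(loss).backward()      # backward on the scaled loss
    scaler.step(optimizer)             # unscales; skips the step on overflow
    scaler.update()                    # adjusts the scale for the next step
    return scaler.get_scale()          # comparable to the logged grad_scale
```
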
2023-03-09 02:55:10,146 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40330.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:55:13,452 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40333.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:55:15,777 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9254, 4.7046, 2.7951, 4.5306, 4.4623, 4.7299, 4.5111, 2.5449], device='cuda:3'), covar=tensor([0.0161, 0.0067, 0.0690, 0.0108, 0.0082, 0.0082, 0.0114, 0.0975], device='cuda:3'), in_proj_covar=tensor([0.0078, 0.0066, 0.0088, 0.0081, 0.0076, 0.0066, 0.0076, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 02:55:20,497 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6628, 2.8467, 2.5742, 2.7919, 3.6770, 3.4463, 3.0518, 2.9612], device='cuda:3'), covar=tensor([0.0169, 0.0322, 0.0588, 0.0389, 0.0150, 0.0172, 0.0345, 0.0339], device='cuda:3'), in_proj_covar=tensor([0.0122, 0.0110, 0.0150, 0.0138, 0.0105, 0.0094, 0.0137, 0.0130], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:55:24,808 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=40343.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 02:55:31,749 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.235e+02 3.021e+02 3.795e+02 4.582e+02 1.168e+03, threshold=7.590e+02, percent-clipped=5.0
2023-03-09 02:56:02,390 INFO [train.py:898] (3/4) Epoch 12, batch 400, loss[loss=0.1849, simple_loss=0.2698, pruned_loss=0.05, over 18504.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2702, pruned_loss=0.04984, over 3143369.82 frames. ], batch size: 53, lr: 9.65e-03, grad_scale: 8.0
2023-03-09 02:56:21,661 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40391.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:56:59,541 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40423.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:57:01,660 INFO [train.py:898] (3/4) Epoch 12, batch 450, loss[loss=0.1694, simple_loss=0.2581, pruned_loss=0.04034, over 18499.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2705, pruned_loss=0.05022, over 3228975.76 frames. ], batch size: 51, lr: 9.65e-03, grad_scale: 8.0
2023-03-09 02:57:15,765 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9300, 4.3865, 4.6296, 3.3185, 3.6293, 3.4901, 2.5579, 2.3529], device='cuda:3'), covar=tensor([0.0203, 0.0157, 0.0062, 0.0294, 0.0334, 0.0219, 0.0751, 0.0892], device='cuda:3'), in_proj_covar=tensor([0.0059, 0.0048, 0.0047, 0.0060, 0.0079, 0.0057, 0.0071, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 02:57:24,543 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8567, 4.2805, 4.4175, 3.1884, 3.6582, 3.4206, 2.2583, 2.1445], device='cuda:3'), covar=tensor([0.0193, 0.0137, 0.0081, 0.0323, 0.0343, 0.0229, 0.0855, 0.0965], device='cuda:3'), in_proj_covar=tensor([0.0059, 0.0048, 0.0047, 0.0060, 0.0079, 0.0057, 0.0071, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 02:57:29,711 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.841e+02 3.105e+02 3.603e+02 4.102e+02 7.155e+02, threshold=7.205e+02, percent-clipped=0.0
2023-03-09 02:57:42,982 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1567, 4.3804, 2.4057, 4.2183, 5.2256, 2.4569, 3.8431, 3.9840], device='cuda:3'), covar=tensor([0.0106, 0.0966, 0.1636, 0.0575, 0.0058, 0.1493, 0.0705, 0.0664], device='cuda:3'), in_proj_covar=tensor([0.0120, 0.0236, 0.0190, 0.0190, 0.0092, 0.0179, 0.0202, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:57:55,828 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=40471.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:57:55,945 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40471.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:58:00,090 INFO [train.py:898] (3/4) Epoch 12, batch 500, loss[loss=0.1924, simple_loss=0.2757, pruned_loss=0.05456, over 18394.00 frames. ], tot_loss[loss=0.1869, simple_loss=0.2724, pruned_loss=0.0507, over 3307917.19 frames. ], batch size: 50, lr: 9.64e-03, grad_scale: 8.0
2023-03-09 02:58:02,700 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2033, 5.1563, 5.3207, 5.2213, 5.1395, 5.8668, 5.5222, 5.2457], device='cuda:3'), covar=tensor([0.0953, 0.0609, 0.0671, 0.0606, 0.1356, 0.0675, 0.0523, 0.1500], device='cuda:3'), in_proj_covar=tensor([0.0308, 0.0238, 0.0250, 0.0255, 0.0291, 0.0356, 0.0238, 0.0346], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3')
2023-03-09 02:58:21,059 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4442, 3.2224, 1.7569, 4.3059, 2.7803, 4.0887, 2.2288, 3.6729], device='cuda:3'), covar=tensor([0.0590, 0.0895, 0.1752, 0.0483, 0.0914, 0.0253, 0.1420, 0.0469], device='cuda:3'), in_proj_covar=tensor([0.0192, 0.0215, 0.0179, 0.0240, 0.0181, 0.0243, 0.0193, 0.0185], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 02:58:21,064 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40492.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:58:52,458 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=40519.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:58:59,225 INFO [train.py:898] (3/4) Epoch 12, batch 550, loss[loss=0.1923, simple_loss=0.2827, pruned_loss=0.05102, over 18492.00 frames. ], tot_loss[loss=0.1865, simple_loss=0.2721, pruned_loss=0.05043, over 3372577.43 frames. ], batch size: 51, lr: 9.63e-03, grad_scale: 8.0
2023-03-09 02:59:26,874 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.116e+02 3.269e+02 4.036e+02 4.725e+02 9.923e+02, threshold=8.073e+02, percent-clipped=3.0
2023-03-09 02:59:29,413 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40551.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:59:31,869 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40553.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:59:56,941 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40574.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 02:59:57,875 INFO [train.py:898] (3/4) Epoch 12, batch 600, loss[loss=0.1805, simple_loss=0.2658, pruned_loss=0.04765, over 18297.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2713, pruned_loss=0.05024, over 3425457.93 frames. ], batch size: 49, lr: 9.63e-03, grad_scale: 4.0
2023-03-09 03:00:23,843 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. limit=5.0
2023-03-09 03:00:56,595 INFO [train.py:898] (3/4) Epoch 12, batch 650, loss[loss=0.2025, simple_loss=0.2847, pruned_loss=0.06013, over 18295.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2709, pruned_loss=0.05038, over 3446318.87 frames. ], batch size: 57, lr: 9.62e-03, grad_scale: 4.0
2023-03-09 03:01:00,852 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40628.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:01:26,542 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.880e+02 2.745e+02 3.552e+02 4.267e+02 1.309e+03, threshold=7.104e+02, percent-clipped=1.0
2023-03-09 03:01:28,049 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40651.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:01:54,736 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0665, 4.6312, 4.7788, 3.3412, 3.7981, 3.6014, 2.8304, 2.4737], device='cuda:3'), covar=tensor([0.0179, 0.0113, 0.0054, 0.0289, 0.0280, 0.0199, 0.0654, 0.0870], device='cuda:3'), in_proj_covar=tensor([0.0059, 0.0048, 0.0048, 0.0060, 0.0079, 0.0057, 0.0072, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 03:01:55,471 INFO [train.py:898] (3/4) Epoch 12, batch 700, loss[loss=0.1596, simple_loss=0.2436, pruned_loss=0.03783, over 18244.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2715, pruned_loss=0.05056, over 3476144.83 frames. ], batch size: 45, lr: 9.62e-03, grad_scale: 4.0
2023-03-09 03:02:08,735 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40686.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:02:26,926 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6745, 3.5536, 3.4094, 3.0158, 3.3954, 2.8256, 2.7504, 3.6645], device='cuda:3'), covar=tensor([0.0036, 0.0068, 0.0068, 0.0110, 0.0072, 0.0150, 0.0148, 0.0046], device='cuda:3'), in_proj_covar=tensor([0.0098, 0.0122, 0.0105, 0.0154, 0.0107, 0.0151, 0.0156, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 03:02:39,504 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40712.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:02:54,260 INFO [train.py:898] (3/4) Epoch 12, batch 750, loss[loss=0.1877, simple_loss=0.2764, pruned_loss=0.04948, over 18177.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2715, pruned_loss=0.05049, over 3500064.30 frames. ], batch size: 62, lr: 9.61e-03, grad_scale: 4.0
2023-03-09 03:03:25,291 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.144e+02 3.250e+02 3.776e+02 4.415e+02 7.567e+02, threshold=7.551e+02, percent-clipped=1.0
2023-03-09 03:03:52,298 INFO [train.py:898] (3/4) Epoch 12, batch 800, loss[loss=0.1839, simple_loss=0.2781, pruned_loss=0.04489, over 18625.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.2711, pruned_loss=0.05008, over 3532730.98 frames. ], batch size: 52, lr: 9.61e-03, grad_scale: 4.0
2023-03-09 03:04:14,251 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7009, 2.1360, 2.6850, 2.7259, 3.4934, 5.2503, 4.7937, 4.0718], device='cuda:3'), covar=tensor([0.1253, 0.2071, 0.2322, 0.1383, 0.1662, 0.0108, 0.0319, 0.0492], device='cuda:3'), in_proj_covar=tensor([0.0247, 0.0304, 0.0322, 0.0250, 0.0366, 0.0189, 0.0262, 0.0208], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 03:04:51,187 INFO [train.py:898] (3/4) Epoch 12, batch 850, loss[loss=0.1834, simple_loss=0.2712, pruned_loss=0.04785, over 18535.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2708, pruned_loss=0.0498, over 3556859.30 frames. ], batch size: 49, lr: 9.60e-03, grad_scale: 4.0
2023-03-09 03:05:18,003 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40848.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:05:21,108 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.184e+02 3.104e+02 3.641e+02 4.538e+02 1.053e+03, threshold=7.281e+02, percent-clipped=3.0
2023-03-09 03:05:21,497 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40851.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:05:38,554 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40865.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:05:41,010 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8294, 3.6891, 5.3999, 3.2304, 4.6997, 2.7325, 3.1618, 2.0709], device='cuda:3'), covar=tensor([0.0916, 0.0768, 0.0064, 0.0551, 0.0397, 0.2117, 0.2127, 0.1639], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0220, 0.0124, 0.0174, 0.0235, 0.0252, 0.0294, 0.0216], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 03:05:48,726 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40874.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:05:49,594 INFO [train.py:898] (3/4) Epoch 12, batch 900, loss[loss=0.1901, simple_loss=0.2801, pruned_loss=0.05, over 18358.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2716, pruned_loss=0.05026, over 3552565.59 frames. ], batch size: 55, lr: 9.59e-03, grad_scale: 4.0
2023-03-09 03:06:18,246 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=40899.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:06:19,487 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40900.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:06:45,187 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=40922.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:06:48,427 INFO [train.py:898] (3/4) Epoch 12, batch 950, loss[loss=0.1864, simple_loss=0.274, pruned_loss=0.04942, over 17786.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2714, pruned_loss=0.05018, over 3556992.72 frames. ], batch size: 70, lr: 9.59e-03, grad_scale: 4.0
2023-03-09 03:06:49,998 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40926.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:06:51,098 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2803, 5.4638, 3.1460, 5.2679, 5.1997, 5.5447, 5.2628, 2.9863], device='cuda:3'), covar=tensor([0.0139, 0.0053, 0.0656, 0.0069, 0.0062, 0.0054, 0.0090, 0.0882], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0068, 0.0088, 0.0083, 0.0075, 0.0067, 0.0078, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 03:06:52,162 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40928.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:07:18,909 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.424e+02 3.344e+02 4.191e+02 5.324e+02 1.167e+03, threshold=8.382e+02, percent-clipped=6.0
2023-03-09 03:07:31,585 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40961.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 03:07:46,840 INFO [train.py:898] (3/4) Epoch 12, batch 1000, loss[loss=0.2381, simple_loss=0.3099, pruned_loss=0.08316, over 12484.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2731, pruned_loss=0.05085, over 3560343.35 frames. ], batch size: 129, lr: 9.58e-03, grad_scale: 4.0
2023-03-09 03:07:48,010 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=40976.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:07:59,347 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40986.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:08:11,868 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40997.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:08:24,351 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41007.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:08:33,836 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.40 vs. limit=5.0
2023-03-09 03:08:45,141 INFO [train.py:898] (3/4) Epoch 12, batch 1050, loss[loss=0.1555, simple_loss=0.2424, pruned_loss=0.03424, over 18275.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2714, pruned_loss=0.05019, over 3559961.45 frames. ], batch size: 49, lr: 9.58e-03, grad_scale: 4.0
2023-03-09 03:08:55,721 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=41034.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:09:08,597 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41045.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:09:14,640 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-09 03:09:14,995 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.197e+02 3.189e+02 3.617e+02 4.210e+02 9.012e+02, threshold=7.234e+02, percent-clipped=1.0
2023-03-09 03:09:24,039 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41058.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 03:09:43,919 INFO [train.py:898] (3/4) Epoch 12, batch 1100, loss[loss=0.1867, simple_loss=0.2859, pruned_loss=0.04371, over 18108.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2715, pruned_loss=0.05033, over 3563641.76 frames. ], batch size: 62, lr: 9.57e-03, grad_scale: 4.0
], batch size: 62, lr: 9.57e-03, grad_scale: 4.0 2023-03-09 03:10:08,790 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0 2023-03-09 03:10:15,218 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41102.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:10:19,930 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41106.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:10:41,770 INFO [train.py:898] (3/4) Epoch 12, batch 1150, loss[loss=0.1881, simple_loss=0.2787, pruned_loss=0.04877, over 18301.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2715, pruned_loss=0.05019, over 3572021.76 frames. ], batch size: 57, lr: 9.56e-03, grad_scale: 4.0 2023-03-09 03:10:56,594 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4876, 2.0266, 2.5733, 2.5217, 3.2960, 5.0477, 4.5444, 3.9603], device='cuda:3'), covar=tensor([0.1345, 0.2113, 0.2426, 0.1518, 0.1838, 0.0116, 0.0367, 0.0517], device='cuda:3'), in_proj_covar=tensor([0.0249, 0.0307, 0.0327, 0.0252, 0.0370, 0.0191, 0.0266, 0.0210], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 03:11:07,784 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41148.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:11:11,015 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.262e+02 3.020e+02 3.568e+02 4.440e+02 1.142e+03, threshold=7.137e+02, percent-clipped=4.0 2023-03-09 03:11:14,520 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.69 vs. limit=2.0 2023-03-09 03:11:25,811 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41163.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:11:39,956 INFO [train.py:898] (3/4) Epoch 12, batch 1200, loss[loss=0.205, simple_loss=0.2873, pruned_loss=0.06132, over 18248.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.2711, pruned_loss=0.05018, over 3583067.59 frames. ], batch size: 60, lr: 9.56e-03, grad_scale: 8.0 2023-03-09 03:12:03,586 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=41196.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:12:22,423 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.31 vs. limit=5.0 2023-03-09 03:12:34,352 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41221.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:12:38,717 INFO [train.py:898] (3/4) Epoch 12, batch 1250, loss[loss=0.16, simple_loss=0.249, pruned_loss=0.03546, over 18276.00 frames. ], tot_loss[loss=0.1854, simple_loss=0.2711, pruned_loss=0.04986, over 3592588.11 frames. 
], batch size: 47, lr: 9.55e-03, grad_scale: 8.0 2023-03-09 03:13:05,462 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9196, 4.9073, 4.9657, 4.7158, 4.6006, 4.6651, 5.0769, 5.0503], device='cuda:3'), covar=tensor([0.0062, 0.0069, 0.0064, 0.0106, 0.0066, 0.0136, 0.0066, 0.0100], device='cuda:3'), in_proj_covar=tensor([0.0082, 0.0057, 0.0061, 0.0077, 0.0064, 0.0090, 0.0074, 0.0074], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 03:13:08,570 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.122e+02 3.147e+02 3.624e+02 4.404e+02 8.408e+02, threshold=7.247e+02, percent-clipped=2.0 2023-03-09 03:13:14,478 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41256.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 03:13:37,215 INFO [train.py:898] (3/4) Epoch 12, batch 1300, loss[loss=0.2017, simple_loss=0.2857, pruned_loss=0.0589, over 17128.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2713, pruned_loss=0.04988, over 3598964.94 frames. ], batch size: 78, lr: 9.55e-03, grad_scale: 8.0 2023-03-09 03:13:50,327 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=5.15 vs. limit=5.0 2023-03-09 03:14:00,450 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0178, 3.7041, 5.1557, 2.9793, 4.5434, 2.8108, 3.0950, 1.9781], device='cuda:3'), covar=tensor([0.0907, 0.0789, 0.0083, 0.0705, 0.0446, 0.2140, 0.2425, 0.1764], device='cuda:3'), in_proj_covar=tensor([0.0201, 0.0219, 0.0122, 0.0174, 0.0232, 0.0251, 0.0289, 0.0214], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 03:14:14,013 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41307.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:14:19,787 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-09 03:14:35,259 INFO [train.py:898] (3/4) Epoch 12, batch 1350, loss[loss=0.1518, simple_loss=0.235, pruned_loss=0.03425, over 18445.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2705, pruned_loss=0.04985, over 3595901.81 frames. ], batch size: 43, lr: 9.54e-03, grad_scale: 8.0 2023-03-09 03:15:05,738 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.782e+02 3.011e+02 3.906e+02 4.787e+02 1.227e+03, threshold=7.812e+02, percent-clipped=6.0 2023-03-09 03:15:08,207 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41353.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:15:10,506 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=41355.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:15:34,028 INFO [train.py:898] (3/4) Epoch 12, batch 1400, loss[loss=0.1906, simple_loss=0.275, pruned_loss=0.05312, over 16003.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2703, pruned_loss=0.04979, over 3597556.25 frames. ], batch size: 94, lr: 9.54e-03, grad_scale: 8.0 2023-03-09 03:16:04,798 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41401.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:16:32,785 INFO [train.py:898] (3/4) Epoch 12, batch 1450, loss[loss=0.1891, simple_loss=0.2821, pruned_loss=0.04807, over 18619.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2708, pruned_loss=0.04984, over 3584267.71 frames. 
], batch size: 52, lr: 9.53e-03, grad_scale: 8.0 2023-03-09 03:16:50,118 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6272, 2.1376, 2.5916, 2.7480, 3.3157, 4.9616, 4.5030, 3.8235], device='cuda:3'), covar=tensor([0.1275, 0.2061, 0.2542, 0.1375, 0.1762, 0.0121, 0.0353, 0.0545], device='cuda:3'), in_proj_covar=tensor([0.0247, 0.0304, 0.0324, 0.0248, 0.0364, 0.0189, 0.0262, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 03:17:03,286 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.105e+02 2.980e+02 3.633e+02 4.455e+02 8.287e+02, threshold=7.266e+02, percent-clipped=1.0 2023-03-09 03:17:11,395 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41458.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:17:31,247 INFO [train.py:898] (3/4) Epoch 12, batch 1500, loss[loss=0.1586, simple_loss=0.24, pruned_loss=0.03865, over 18499.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2707, pruned_loss=0.04986, over 3585830.11 frames. ], batch size: 44, lr: 9.52e-03, grad_scale: 8.0 2023-03-09 03:18:24,546 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41521.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:18:28,654 INFO [train.py:898] (3/4) Epoch 12, batch 1550, loss[loss=0.224, simple_loss=0.3017, pruned_loss=0.07315, over 12439.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2713, pruned_loss=0.04997, over 3585431.34 frames. ], batch size: 129, lr: 9.52e-03, grad_scale: 8.0 2023-03-09 03:19:00,403 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.006e+02 3.193e+02 3.688e+02 4.708e+02 1.427e+03, threshold=7.376e+02, percent-clipped=3.0 2023-03-09 03:19:06,264 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41556.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 03:19:14,207 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6435, 3.0852, 2.1761, 2.9190, 3.6951, 3.6091, 3.1394, 3.2398], device='cuda:3'), covar=tensor([0.0198, 0.0196, 0.0647, 0.0252, 0.0156, 0.0139, 0.0265, 0.0232], device='cuda:3'), in_proj_covar=tensor([0.0119, 0.0107, 0.0148, 0.0136, 0.0105, 0.0091, 0.0134, 0.0128], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 03:19:20,754 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=41569.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:19:27,333 INFO [train.py:898] (3/4) Epoch 12, batch 1600, loss[loss=0.2421, simple_loss=0.3112, pruned_loss=0.08649, over 12164.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2722, pruned_loss=0.05017, over 3576841.52 frames. ], batch size: 129, lr: 9.51e-03, grad_scale: 8.0 2023-03-09 03:19:30,595 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. 
limit=2.0 2023-03-09 03:20:03,010 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=41604.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:20:20,284 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0215, 3.8929, 5.1476, 3.0968, 4.4017, 2.7231, 3.1386, 1.7602], device='cuda:3'), covar=tensor([0.0884, 0.0736, 0.0081, 0.0691, 0.0508, 0.2073, 0.2361, 0.1904], device='cuda:3'), in_proj_covar=tensor([0.0198, 0.0221, 0.0123, 0.0175, 0.0232, 0.0248, 0.0291, 0.0212], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 03:20:26,593 INFO [train.py:898] (3/4) Epoch 12, batch 1650, loss[loss=0.1525, simple_loss=0.2334, pruned_loss=0.03578, over 18431.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2717, pruned_loss=0.05011, over 3573055.63 frames. ], batch size: 43, lr: 9.51e-03, grad_scale: 8.0 2023-03-09 03:20:58,271 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.922e+02 3.190e+02 3.689e+02 4.449e+02 1.002e+03, threshold=7.378e+02, percent-clipped=1.0 2023-03-09 03:21:00,828 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41653.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:21:01,984 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41654.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:21:25,759 INFO [train.py:898] (3/4) Epoch 12, batch 1700, loss[loss=0.1822, simple_loss=0.2707, pruned_loss=0.04684, over 18349.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2714, pruned_loss=0.05013, over 3577007.01 frames. ], batch size: 55, lr: 9.50e-03, grad_scale: 8.0 2023-03-09 03:21:57,766 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=41701.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:21:57,914 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41701.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:22:13,890 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41715.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 03:22:20,514 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1236, 5.4633, 3.0039, 5.2645, 5.2870, 5.5372, 5.2986, 2.6191], device='cuda:3'), covar=tensor([0.0169, 0.0063, 0.0677, 0.0080, 0.0068, 0.0063, 0.0089, 0.1018], device='cuda:3'), in_proj_covar=tensor([0.0080, 0.0068, 0.0088, 0.0084, 0.0077, 0.0066, 0.0077, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 03:22:24,611 INFO [train.py:898] (3/4) Epoch 12, batch 1750, loss[loss=0.2028, simple_loss=0.2902, pruned_loss=0.05776, over 17964.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2721, pruned_loss=0.05063, over 3566238.80 frames. ], batch size: 65, lr: 9.50e-03, grad_scale: 8.0 2023-03-09 03:22:54,139 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=41749.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:22:56,202 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.910e+02 3.102e+02 3.582e+02 4.165e+02 6.416e+02, threshold=7.165e+02, percent-clipped=1.0 2023-03-09 03:23:04,325 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41758.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:23:20,477 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.88 vs. 
limit=5.0 2023-03-09 03:23:22,839 INFO [train.py:898] (3/4) Epoch 12, batch 1800, loss[loss=0.1688, simple_loss=0.2538, pruned_loss=0.04193, over 18533.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2726, pruned_loss=0.05052, over 3571257.56 frames. ], batch size: 49, lr: 9.49e-03, grad_scale: 8.0 2023-03-09 03:23:59,582 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=41806.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:24:20,774 INFO [train.py:898] (3/4) Epoch 12, batch 1850, loss[loss=0.1697, simple_loss=0.2546, pruned_loss=0.04234, over 18630.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.272, pruned_loss=0.05011, over 3574847.91 frames. ], batch size: 48, lr: 9.49e-03, grad_scale: 8.0 2023-03-09 03:24:51,661 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.068e+02 3.165e+02 3.584e+02 4.367e+02 1.390e+03, threshold=7.168e+02, percent-clipped=5.0 2023-03-09 03:25:19,163 INFO [train.py:898] (3/4) Epoch 12, batch 1900, loss[loss=0.2046, simple_loss=0.2932, pruned_loss=0.05797, over 17726.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2708, pruned_loss=0.04976, over 3580179.69 frames. ], batch size: 70, lr: 9.48e-03, grad_scale: 8.0 2023-03-09 03:25:25,319 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9387, 3.8380, 5.0093, 3.1385, 4.3892, 2.6475, 3.1450, 1.9288], device='cuda:3'), covar=tensor([0.0988, 0.0794, 0.0108, 0.0710, 0.0524, 0.2264, 0.2416, 0.1849], device='cuda:3'), in_proj_covar=tensor([0.0198, 0.0220, 0.0123, 0.0173, 0.0231, 0.0246, 0.0291, 0.0211], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 03:25:56,957 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41906.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:26:06,861 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8037, 5.2463, 5.3018, 5.3909, 4.8309, 5.2413, 4.1969, 5.1453], device='cuda:3'), covar=tensor([0.0258, 0.0479, 0.0272, 0.0364, 0.0398, 0.0288, 0.1661, 0.0339], device='cuda:3'), in_proj_covar=tensor([0.0178, 0.0225, 0.0214, 0.0258, 0.0226, 0.0227, 0.0286, 0.0215], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 03:26:17,858 INFO [train.py:898] (3/4) Epoch 12, batch 1950, loss[loss=0.1896, simple_loss=0.2765, pruned_loss=0.05135, over 18346.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2703, pruned_loss=0.0499, over 3571978.14 frames. 
], batch size: 56, lr: 9.47e-03, grad_scale: 8.0 2023-03-09 03:26:26,163 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8377, 3.8294, 3.4829, 3.2614, 3.5939, 2.9596, 2.8036, 3.8687], device='cuda:3'), covar=tensor([0.0041, 0.0060, 0.0076, 0.0127, 0.0068, 0.0162, 0.0181, 0.0053], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0125, 0.0108, 0.0158, 0.0111, 0.0156, 0.0160, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 03:26:44,591 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41948.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 03:26:47,526 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.082e+02 3.043e+02 3.852e+02 4.648e+02 1.107e+03, threshold=7.704e+02, percent-clipped=2.0 2023-03-09 03:27:07,438 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41967.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:27:16,277 INFO [train.py:898] (3/4) Epoch 12, batch 2000, loss[loss=0.164, simple_loss=0.2469, pruned_loss=0.04058, over 18392.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2691, pruned_loss=0.04951, over 3575624.99 frames. ], batch size: 42, lr: 9.47e-03, grad_scale: 8.0 2023-03-09 03:28:02,151 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42009.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 03:28:03,137 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42010.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 03:28:20,366 INFO [train.py:898] (3/4) Epoch 12, batch 2050, loss[loss=0.1847, simple_loss=0.2711, pruned_loss=0.04912, over 18362.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2693, pruned_loss=0.04969, over 3570594.31 frames. ], batch size: 50, lr: 9.46e-03, grad_scale: 8.0 2023-03-09 03:28:50,573 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.053e+02 3.014e+02 3.595e+02 4.305e+02 8.922e+02, threshold=7.191e+02, percent-clipped=2.0 2023-03-09 03:28:57,471 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5308, 2.1248, 2.5755, 2.5442, 3.2732, 4.8462, 4.4444, 3.6721], device='cuda:3'), covar=tensor([0.1351, 0.2059, 0.2326, 0.1473, 0.1798, 0.0124, 0.0381, 0.0587], device='cuda:3'), in_proj_covar=tensor([0.0247, 0.0306, 0.0324, 0.0248, 0.0363, 0.0191, 0.0262, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 03:29:01,158 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2598, 2.5697, 2.3446, 2.5875, 3.2580, 3.1662, 2.9151, 2.7571], device='cuda:3'), covar=tensor([0.0202, 0.0288, 0.0617, 0.0356, 0.0244, 0.0154, 0.0369, 0.0368], device='cuda:3'), in_proj_covar=tensor([0.0119, 0.0109, 0.0150, 0.0136, 0.0107, 0.0091, 0.0134, 0.0130], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 03:29:19,680 INFO [train.py:898] (3/4) Epoch 12, batch 2100, loss[loss=0.2054, simple_loss=0.2951, pruned_loss=0.05784, over 18076.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2701, pruned_loss=0.0498, over 3580784.04 frames. 
], batch size: 62, lr: 9.46e-03, grad_scale: 4.0 2023-03-09 03:29:32,738 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4621, 5.9848, 5.3868, 5.7982, 5.5771, 5.4815, 6.0124, 6.0130], device='cuda:3'), covar=tensor([0.1205, 0.0745, 0.0464, 0.0740, 0.1525, 0.0734, 0.0646, 0.0711], device='cuda:3'), in_proj_covar=tensor([0.0538, 0.0452, 0.0341, 0.0477, 0.0657, 0.0484, 0.0635, 0.0471], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 03:30:18,696 INFO [train.py:898] (3/4) Epoch 12, batch 2150, loss[loss=0.1597, simple_loss=0.239, pruned_loss=0.04016, over 18397.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.269, pruned_loss=0.04907, over 3582144.71 frames. ], batch size: 42, lr: 9.45e-03, grad_scale: 4.0 2023-03-09 03:30:49,400 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.910e+02 3.158e+02 3.739e+02 4.529e+02 7.533e+02, threshold=7.477e+02, percent-clipped=1.0 2023-03-09 03:31:14,115 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7167, 2.9527, 2.6586, 2.9361, 3.6878, 3.7688, 3.1782, 3.2103], device='cuda:3'), covar=tensor([0.0170, 0.0279, 0.0505, 0.0312, 0.0179, 0.0110, 0.0326, 0.0298], device='cuda:3'), in_proj_covar=tensor([0.0120, 0.0109, 0.0150, 0.0137, 0.0107, 0.0091, 0.0135, 0.0131], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 03:31:17,039 INFO [train.py:898] (3/4) Epoch 12, batch 2200, loss[loss=0.1793, simple_loss=0.2562, pruned_loss=0.05118, over 18394.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2689, pruned_loss=0.04898, over 3594574.78 frames. ], batch size: 42, lr: 9.45e-03, grad_scale: 4.0 2023-03-09 03:31:32,213 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1916, 5.1361, 2.6277, 5.0611, 4.9045, 5.1418, 4.9692, 2.4063], device='cuda:3'), covar=tensor([0.0171, 0.0129, 0.0938, 0.0109, 0.0102, 0.0153, 0.0152, 0.1499], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0068, 0.0088, 0.0084, 0.0077, 0.0067, 0.0077, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 03:32:08,357 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4534, 2.1386, 2.0390, 2.1545, 2.5261, 2.4850, 2.3084, 2.1836], device='cuda:3'), covar=tensor([0.0168, 0.0205, 0.0442, 0.0346, 0.0197, 0.0141, 0.0347, 0.0279], device='cuda:3'), in_proj_covar=tensor([0.0120, 0.0111, 0.0152, 0.0137, 0.0107, 0.0092, 0.0135, 0.0132], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 03:32:15,656 INFO [train.py:898] (3/4) Epoch 12, batch 2250, loss[loss=0.1646, simple_loss=0.2467, pruned_loss=0.04124, over 18496.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2685, pruned_loss=0.04888, over 3581254.15 frames. ], batch size: 47, lr: 9.44e-03, grad_scale: 4.0 2023-03-09 03:32:39,356 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.96 vs. 
limit=5.0 2023-03-09 03:32:46,342 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.218e+02 3.002e+02 3.427e+02 4.398e+02 7.070e+02, threshold=6.854e+02, percent-clipped=0.0 2023-03-09 03:32:58,286 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42262.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:33:06,632 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3161, 5.2378, 5.4759, 5.4271, 5.2997, 6.0173, 5.6319, 5.4690], device='cuda:3'), covar=tensor([0.1087, 0.0604, 0.0705, 0.0670, 0.1338, 0.0737, 0.0597, 0.1529], device='cuda:3'), in_proj_covar=tensor([0.0312, 0.0237, 0.0252, 0.0255, 0.0295, 0.0360, 0.0239, 0.0349], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003], device='cuda:3') 2023-03-09 03:33:14,286 INFO [train.py:898] (3/4) Epoch 12, batch 2300, loss[loss=0.1968, simple_loss=0.2884, pruned_loss=0.05259, over 17808.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2692, pruned_loss=0.04906, over 3581236.73 frames. ], batch size: 70, lr: 9.44e-03, grad_scale: 4.0 2023-03-09 03:33:24,718 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5639, 5.4960, 5.0980, 5.5384, 5.3905, 4.8169, 5.3886, 5.1538], device='cuda:3'), covar=tensor([0.0444, 0.0409, 0.1439, 0.0782, 0.0651, 0.0425, 0.0438, 0.0943], device='cuda:3'), in_proj_covar=tensor([0.0413, 0.0477, 0.0630, 0.0370, 0.0356, 0.0428, 0.0459, 0.0592], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 03:33:33,738 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2530, 5.2119, 4.7838, 5.2308, 5.1192, 4.5444, 5.0905, 4.8036], device='cuda:3'), covar=tensor([0.0463, 0.0437, 0.1585, 0.0739, 0.0656, 0.0464, 0.0464, 0.1148], device='cuda:3'), in_proj_covar=tensor([0.0415, 0.0479, 0.0634, 0.0372, 0.0358, 0.0430, 0.0460, 0.0596], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 03:33:47,384 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42304.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 03:33:54,935 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=42310.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 03:34:13,011 INFO [train.py:898] (3/4) Epoch 12, batch 2350, loss[loss=0.1924, simple_loss=0.2804, pruned_loss=0.05217, over 18386.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2691, pruned_loss=0.049, over 3588788.73 frames. 
], batch size: 50, lr: 9.43e-03, grad_scale: 4.0 2023-03-09 03:34:36,867 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6419, 5.5187, 5.1522, 5.5463, 5.5292, 4.9450, 5.4903, 5.2287], device='cuda:3'), covar=tensor([0.0353, 0.0400, 0.1433, 0.0766, 0.0463, 0.0385, 0.0380, 0.0864], device='cuda:3'), in_proj_covar=tensor([0.0415, 0.0485, 0.0640, 0.0376, 0.0361, 0.0434, 0.0466, 0.0601], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 03:34:43,295 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.195e+02 3.349e+02 3.901e+02 4.944e+02 8.097e+02, threshold=7.803e+02, percent-clipped=6.0 2023-03-09 03:34:50,326 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=42358.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:35:10,383 INFO [train.py:898] (3/4) Epoch 12, batch 2400, loss[loss=0.1929, simple_loss=0.2808, pruned_loss=0.05243, over 18488.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2679, pruned_loss=0.04858, over 3590203.77 frames. ], batch size: 53, lr: 9.42e-03, grad_scale: 8.0 2023-03-09 03:35:15,879 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42379.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:36:05,753 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.11 vs. limit=5.0 2023-03-09 03:36:08,415 INFO [train.py:898] (3/4) Epoch 12, batch 2450, loss[loss=0.1714, simple_loss=0.2645, pruned_loss=0.03918, over 18500.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2678, pruned_loss=0.04849, over 3590922.44 frames. ], batch size: 51, lr: 9.42e-03, grad_scale: 8.0 2023-03-09 03:36:27,202 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42440.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:36:40,613 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.166e+02 3.219e+02 3.774e+02 4.354e+02 8.177e+02, threshold=7.548e+02, percent-clipped=1.0 2023-03-09 03:37:07,531 INFO [train.py:898] (3/4) Epoch 12, batch 2500, loss[loss=0.1792, simple_loss=0.2695, pruned_loss=0.04441, over 18273.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2679, pruned_loss=0.04853, over 3588133.42 frames. ], batch size: 47, lr: 9.41e-03, grad_scale: 8.0 2023-03-09 03:37:36,042 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0 2023-03-09 03:37:44,349 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42506.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:38:06,075 INFO [train.py:898] (3/4) Epoch 12, batch 2550, loss[loss=0.1795, simple_loss=0.2662, pruned_loss=0.04644, over 18350.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.2676, pruned_loss=0.04839, over 3585617.03 frames. 
], batch size: 55, lr: 9.41e-03, grad_scale: 8.0 2023-03-09 03:38:31,132 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8147, 2.8455, 4.3181, 4.0347, 2.4487, 4.5686, 4.1215, 2.7046], device='cuda:3'), covar=tensor([0.0384, 0.1529, 0.0245, 0.0245, 0.1713, 0.0209, 0.0386, 0.1203], device='cuda:3'), in_proj_covar=tensor([0.0194, 0.0227, 0.0162, 0.0146, 0.0213, 0.0188, 0.0216, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 03:38:34,563 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7286, 2.8232, 2.7022, 2.8674, 3.6674, 3.6671, 3.1040, 2.9595], device='cuda:3'), covar=tensor([0.0160, 0.0272, 0.0550, 0.0359, 0.0176, 0.0138, 0.0438, 0.0353], device='cuda:3'), in_proj_covar=tensor([0.0124, 0.0113, 0.0156, 0.0141, 0.0109, 0.0094, 0.0139, 0.0134], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 03:38:37,552 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.199e+02 3.182e+02 3.754e+02 4.508e+02 9.914e+02, threshold=7.507e+02, percent-clipped=4.0 2023-03-09 03:38:49,231 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=42562.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:38:54,986 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42567.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:38:57,252 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3445, 5.8912, 5.3698, 5.6298, 5.3916, 5.3782, 5.8689, 5.8754], device='cuda:3'), covar=tensor([0.1019, 0.0584, 0.0577, 0.0694, 0.1406, 0.0682, 0.0583, 0.0671], device='cuda:3'), in_proj_covar=tensor([0.0532, 0.0436, 0.0339, 0.0470, 0.0651, 0.0479, 0.0623, 0.0467], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 03:39:04,474 INFO [train.py:898] (3/4) Epoch 12, batch 2600, loss[loss=0.1801, simple_loss=0.2642, pruned_loss=0.04795, over 18378.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.267, pruned_loss=0.04788, over 3602397.11 frames. ], batch size: 50, lr: 9.40e-03, grad_scale: 8.0 2023-03-09 03:39:39,391 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=42604.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 03:39:46,072 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=42610.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:40:03,350 INFO [train.py:898] (3/4) Epoch 12, batch 2650, loss[loss=0.1827, simple_loss=0.2755, pruned_loss=0.04501, over 18503.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2683, pruned_loss=0.04841, over 3601842.40 frames. ], batch size: 51, lr: 9.40e-03, grad_scale: 8.0 2023-03-09 03:40:12,777 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2347, 4.1707, 3.9103, 4.1661, 4.1853, 3.7610, 4.1531, 3.9684], device='cuda:3'), covar=tensor([0.0518, 0.0678, 0.1493, 0.0747, 0.0602, 0.0456, 0.0488, 0.1020], device='cuda:3'), in_proj_covar=tensor([0.0416, 0.0486, 0.0635, 0.0377, 0.0364, 0.0437, 0.0466, 0.0604], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 03:40:14,194 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.01 vs. 
limit=5.0 2023-03-09 03:40:31,649 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42649.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:40:34,771 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.962e+02 3.110e+02 3.787e+02 4.423e+02 7.417e+02, threshold=7.574e+02, percent-clipped=0.0 2023-03-09 03:40:35,006 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=42652.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 03:41:00,854 INFO [train.py:898] (3/4) Epoch 12, batch 2700, loss[loss=0.2115, simple_loss=0.2945, pruned_loss=0.06424, over 18076.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2693, pruned_loss=0.04884, over 3592291.48 frames. ], batch size: 62, lr: 9.39e-03, grad_scale: 8.0 2023-03-09 03:41:42,273 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42710.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:41:59,012 INFO [train.py:898] (3/4) Epoch 12, batch 2750, loss[loss=0.2033, simple_loss=0.2891, pruned_loss=0.05869, over 17059.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2694, pruned_loss=0.04888, over 3590996.08 frames. ], batch size: 78, lr: 9.39e-03, grad_scale: 8.0 2023-03-09 03:42:09,191 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5775, 5.1024, 5.1190, 5.0861, 4.5936, 4.9556, 4.3903, 5.0017], device='cuda:3'), covar=tensor([0.0225, 0.0268, 0.0170, 0.0347, 0.0354, 0.0224, 0.1091, 0.0267], device='cuda:3'), in_proj_covar=tensor([0.0180, 0.0227, 0.0213, 0.0260, 0.0227, 0.0229, 0.0285, 0.0217], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 03:42:11,371 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42735.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:42:32,192 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.860e+02 3.123e+02 3.590e+02 4.311e+02 1.461e+03, threshold=7.180e+02, percent-clipped=1.0 2023-03-09 03:42:58,258 INFO [train.py:898] (3/4) Epoch 12, batch 2800, loss[loss=0.2058, simple_loss=0.2942, pruned_loss=0.05866, over 17859.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2699, pruned_loss=0.04927, over 3583481.58 frames. ], batch size: 70, lr: 9.38e-03, grad_scale: 8.0 2023-03-09 03:43:57,204 INFO [train.py:898] (3/4) Epoch 12, batch 2850, loss[loss=0.1644, simple_loss=0.2401, pruned_loss=0.04431, over 18412.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.27, pruned_loss=0.04918, over 3589081.27 frames. ], batch size: 42, lr: 9.38e-03, grad_scale: 8.0 2023-03-09 03:44:19,640 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.69 vs. limit=2.0 2023-03-09 03:44:27,910 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.023e+02 3.014e+02 3.707e+02 4.689e+02 1.677e+03, threshold=7.413e+02, percent-clipped=4.0 2023-03-09 03:44:37,062 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42859.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:44:40,345 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42862.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:44:54,938 INFO [train.py:898] (3/4) Epoch 12, batch 2900, loss[loss=0.1992, simple_loss=0.289, pruned_loss=0.05475, over 18305.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2698, pruned_loss=0.04926, over 3593599.38 frames. 
], batch size: 54, lr: 9.37e-03, grad_scale: 8.0 2023-03-09 03:44:59,753 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42879.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:45:04,021 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7315, 5.2299, 5.2482, 5.1703, 4.7615, 5.1056, 4.5411, 5.1158], device='cuda:3'), covar=tensor([0.0210, 0.0282, 0.0186, 0.0342, 0.0356, 0.0201, 0.1142, 0.0256], device='cuda:3'), in_proj_covar=tensor([0.0181, 0.0228, 0.0217, 0.0261, 0.0228, 0.0229, 0.0282, 0.0218], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 03:45:47,643 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42920.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:45:53,048 INFO [train.py:898] (3/4) Epoch 12, batch 2950, loss[loss=0.1708, simple_loss=0.2552, pruned_loss=0.04319, over 18548.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2705, pruned_loss=0.04968, over 3579379.70 frames. ], batch size: 49, lr: 9.36e-03, grad_scale: 8.0 2023-03-09 03:46:11,089 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42940.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:46:13,443 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3017, 4.3706, 2.5654, 4.3330, 5.4684, 3.0191, 3.7181, 3.7228], device='cuda:3'), covar=tensor([0.0100, 0.1158, 0.1537, 0.0546, 0.0053, 0.1033, 0.0751, 0.1090], device='cuda:3'), in_proj_covar=tensor([0.0124, 0.0238, 0.0193, 0.0190, 0.0093, 0.0176, 0.0206, 0.0208], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 03:46:24,426 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.040e+02 2.922e+02 3.740e+02 4.612e+02 1.225e+03, threshold=7.481e+02, percent-clipped=6.0 2023-03-09 03:46:37,891 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6519, 5.5377, 5.1979, 5.5633, 5.5162, 4.8924, 5.4448, 5.1916], device='cuda:3'), covar=tensor([0.0341, 0.0399, 0.1199, 0.0640, 0.0528, 0.0413, 0.0364, 0.0967], device='cuda:3'), in_proj_covar=tensor([0.0416, 0.0486, 0.0633, 0.0377, 0.0364, 0.0438, 0.0464, 0.0601], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 03:46:41,512 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.29 vs. limit=5.0 2023-03-09 03:46:51,979 INFO [train.py:898] (3/4) Epoch 12, batch 3000, loss[loss=0.1863, simple_loss=0.2755, pruned_loss=0.04848, over 18477.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2704, pruned_loss=0.0499, over 3581573.81 frames. ], batch size: 53, lr: 9.36e-03, grad_scale: 8.0 2023-03-09 03:46:51,979 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 03:47:04,091 INFO [train.py:932] (3/4) Epoch 12, validation: loss=0.1557, simple_loss=0.2578, pruned_loss=0.02677, over 944034.00 frames. 
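Two quantities in the entries above can be cross-checked directly from the logged numbers: the tot_loss that train.py reports is recovered by weighting the simple loss by 0.5 and adding the pruned loss, and the threshold that optim.py reports equals Clipping_scale times the median of the five grad-norm quartile values. The sketch below is illustrative Python only, not code from train.py or optim.py, and the helper names are invented; it simply verifies both relations against the Epoch 12, batch 900 entry and the first optim.py entry above.

    # Illustrative helpers, not the project's own code; they restate
    # arithmetic that the log lines above confirm.

    def combined_loss(simple_loss: float, pruned_loss: float,
                      simple_loss_scale: float = 0.5) -> float:
        # tot_loss as reported by train.py appears to be
        # simple_loss_scale * simple_loss + pruned_loss.
        return simple_loss_scale * simple_loss + pruned_loss

    def clip_threshold(grad_norm_quartiles, clipping_scale: float = 2.0) -> float:
        # optim.py logs five values, which look like the 0/25/50/75/100th
        # percentiles of recent gradient norms; the reported threshold is
        # clipping_scale times the middle value (the median).
        return clipping_scale * grad_norm_quartiles[2]

    # Cross-checks against the entries above:
    # Epoch 12, batch 900: 0.5 * 0.2716 + 0.05026 = 0.1861
    assert abs(combined_loss(0.2716, 0.05026) - 0.1861) < 1e-4
    # First optim.py entry: 2.0 * 3.641e+02 = 7.281e+02
    assert abs(clip_threshold([218.4, 310.4, 364.1, 453.8, 1053.0]) - 728.1) < 0.5

The same relations hold throughout the excerpt (for example 2.0 * 4.191e+02 = 8.382e+02 at the next optim.py entry), while grad_scale stepping from 4.0 to 8.0 over the course of the epoch is consistent with fp16 loss scaling rather than either formula.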
2023-03-09 03:47:04,092 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 03:47:23,244 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5551, 3.4074, 3.3413, 3.0046, 3.2842, 2.7072, 2.7333, 3.5346], device='cuda:3'), covar=tensor([0.0048, 0.0085, 0.0065, 0.0115, 0.0085, 0.0159, 0.0158, 0.0056], device='cuda:3'), in_proj_covar=tensor([0.0103, 0.0125, 0.0109, 0.0157, 0.0110, 0.0155, 0.0160, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 03:47:40,314 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43005.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:48:03,058 INFO [train.py:898] (3/4) Epoch 12, batch 3050, loss[loss=0.216, simple_loss=0.2953, pruned_loss=0.06833, over 12526.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2698, pruned_loss=0.04937, over 3580545.02 frames. ], batch size: 130, lr: 9.35e-03, grad_scale: 8.0 2023-03-09 03:48:12,659 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7848, 5.3030, 5.3025, 5.3451, 4.8669, 5.2348, 4.6052, 5.1760], device='cuda:3'), covar=tensor([0.0208, 0.0289, 0.0178, 0.0317, 0.0341, 0.0198, 0.1123, 0.0269], device='cuda:3'), in_proj_covar=tensor([0.0183, 0.0231, 0.0218, 0.0263, 0.0229, 0.0231, 0.0285, 0.0221], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 03:48:14,871 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43035.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:48:35,553 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.270e+02 3.222e+02 3.719e+02 4.518e+02 9.955e+02, threshold=7.438e+02, percent-clipped=1.0 2023-03-09 03:49:01,445 INFO [train.py:898] (3/4) Epoch 12, batch 3100, loss[loss=0.2109, simple_loss=0.2922, pruned_loss=0.06478, over 17987.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2695, pruned_loss=0.04918, over 3583909.00 frames. ], batch size: 65, lr: 9.35e-03, grad_scale: 8.0 2023-03-09 03:49:11,122 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=43083.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:49:15,176 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.87 vs. limit=2.0 2023-03-09 03:49:15,808 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3758, 5.9684, 5.4550, 5.6601, 5.5561, 5.4286, 6.0246, 5.9859], device='cuda:3'), covar=tensor([0.1176, 0.0691, 0.0441, 0.0710, 0.1279, 0.0640, 0.0526, 0.0649], device='cuda:3'), in_proj_covar=tensor([0.0551, 0.0450, 0.0340, 0.0481, 0.0667, 0.0492, 0.0638, 0.0475], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 03:50:00,207 INFO [train.py:898] (3/4) Epoch 12, batch 3150, loss[loss=0.2128, simple_loss=0.2932, pruned_loss=0.0662, over 12857.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2694, pruned_loss=0.0492, over 3569814.02 frames. 
], batch size: 131, lr: 9.34e-03, grad_scale: 8.0 2023-03-09 03:50:31,048 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.182e+02 3.274e+02 3.785e+02 4.716e+02 1.243e+03, threshold=7.571e+02, percent-clipped=5.0 2023-03-09 03:50:43,647 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43162.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:50:58,062 INFO [train.py:898] (3/4) Epoch 12, batch 3200, loss[loss=0.1781, simple_loss=0.2604, pruned_loss=0.04794, over 18274.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2694, pruned_loss=0.04896, over 3573001.40 frames. ], batch size: 47, lr: 9.34e-03, grad_scale: 8.0 2023-03-09 03:51:12,523 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6181, 2.5873, 4.1852, 3.9513, 2.1205, 4.5087, 3.8298, 2.8636], device='cuda:3'), covar=tensor([0.0404, 0.1776, 0.0287, 0.0250, 0.2087, 0.0205, 0.0414, 0.1064], device='cuda:3'), in_proj_covar=tensor([0.0190, 0.0221, 0.0161, 0.0145, 0.0213, 0.0187, 0.0210, 0.0189], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 03:51:39,591 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=43210.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:51:45,928 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43215.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:51:57,220 INFO [train.py:898] (3/4) Epoch 12, batch 3250, loss[loss=0.2108, simple_loss=0.3, pruned_loss=0.06077, over 18048.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2696, pruned_loss=0.04935, over 3577708.06 frames. ], batch size: 62, lr: 9.33e-03, grad_scale: 8.0 2023-03-09 03:52:08,893 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43235.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:52:19,907 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5775, 2.8439, 2.6035, 2.9266, 3.6618, 3.5016, 3.1806, 3.0482], device='cuda:3'), covar=tensor([0.0219, 0.0294, 0.0660, 0.0349, 0.0179, 0.0144, 0.0349, 0.0376], device='cuda:3'), in_proj_covar=tensor([0.0123, 0.0111, 0.0154, 0.0139, 0.0107, 0.0094, 0.0138, 0.0134], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 03:52:28,373 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.179e+02 3.099e+02 3.708e+02 4.167e+02 9.547e+02, threshold=7.415e+02, percent-clipped=3.0 2023-03-09 03:52:55,946 INFO [train.py:898] (3/4) Epoch 12, batch 3300, loss[loss=0.2103, simple_loss=0.2882, pruned_loss=0.06619, over 11967.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2688, pruned_loss=0.04891, over 3572223.90 frames. ], batch size: 129, lr: 9.33e-03, grad_scale: 8.0 2023-03-09 03:53:30,924 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43305.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:53:54,817 INFO [train.py:898] (3/4) Epoch 12, batch 3350, loss[loss=0.2029, simple_loss=0.2877, pruned_loss=0.05907, over 17780.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2688, pruned_loss=0.04902, over 3578728.26 frames. 
], batch size: 70, lr: 9.32e-03, grad_scale: 8.0 2023-03-09 03:54:25,614 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.041e+02 3.064e+02 3.610e+02 4.297e+02 1.025e+03, threshold=7.219e+02, percent-clipped=3.0 2023-03-09 03:54:26,959 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=43353.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:54:53,263 INFO [train.py:898] (3/4) Epoch 12, batch 3400, loss[loss=0.1718, simple_loss=0.2654, pruned_loss=0.03911, over 18553.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.269, pruned_loss=0.04922, over 3565278.24 frames. ], batch size: 54, lr: 9.32e-03, grad_scale: 8.0 2023-03-09 03:55:51,947 INFO [train.py:898] (3/4) Epoch 12, batch 3450, loss[loss=0.1984, simple_loss=0.2839, pruned_loss=0.05644, over 17146.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2687, pruned_loss=0.04922, over 3571154.95 frames. ], batch size: 78, lr: 9.31e-03, grad_scale: 8.0 2023-03-09 03:56:23,234 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.156e+02 3.064e+02 3.567e+02 4.230e+02 7.375e+02, threshold=7.135e+02, percent-clipped=1.0 2023-03-09 03:56:31,487 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1964, 5.1698, 5.3973, 5.3743, 5.2097, 5.8695, 5.6365, 5.1483], device='cuda:3'), covar=tensor([0.1093, 0.0723, 0.0647, 0.0580, 0.1588, 0.0777, 0.0575, 0.2007], device='cuda:3'), in_proj_covar=tensor([0.0316, 0.0243, 0.0259, 0.0258, 0.0300, 0.0368, 0.0246, 0.0359], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3') 2023-03-09 03:56:51,194 INFO [train.py:898] (3/4) Epoch 12, batch 3500, loss[loss=0.1863, simple_loss=0.2736, pruned_loss=0.04952, over 18109.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2681, pruned_loss=0.04861, over 3579800.33 frames. ], batch size: 62, lr: 9.31e-03, grad_scale: 8.0 2023-03-09 03:57:24,785 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8069, 5.3366, 5.3853, 5.3860, 4.9000, 5.2250, 4.6158, 5.2051], device='cuda:3'), covar=tensor([0.0223, 0.0300, 0.0166, 0.0284, 0.0299, 0.0204, 0.1088, 0.0270], device='cuda:3'), in_proj_covar=tensor([0.0180, 0.0228, 0.0218, 0.0259, 0.0228, 0.0231, 0.0285, 0.0221], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 03:57:35,028 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43514.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:57:36,050 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43515.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:57:44,576 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3152, 4.3081, 2.4005, 4.2596, 5.3877, 2.7236, 3.8630, 4.1183], device='cuda:3'), covar=tensor([0.0072, 0.1025, 0.1693, 0.0545, 0.0044, 0.1261, 0.0683, 0.0642], device='cuda:3'), in_proj_covar=tensor([0.0123, 0.0235, 0.0191, 0.0188, 0.0093, 0.0174, 0.0203, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 03:57:46,260 INFO [train.py:898] (3/4) Epoch 12, batch 3550, loss[loss=0.2085, simple_loss=0.2877, pruned_loss=0.06462, over 18232.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2679, pruned_loss=0.04866, over 3589700.00 frames. 
], batch size: 60, lr: 9.30e-03, grad_scale: 8.0 2023-03-09 03:57:57,047 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43535.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:58:15,320 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.095e+02 3.008e+02 3.651e+02 4.331e+02 1.115e+03, threshold=7.301e+02, percent-clipped=5.0 2023-03-09 03:58:27,331 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=43563.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:58:28,670 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3598, 3.3897, 3.2388, 2.8895, 3.2075, 2.5410, 2.6067, 3.4597], device='cuda:3'), covar=tensor([0.0045, 0.0067, 0.0069, 0.0119, 0.0083, 0.0166, 0.0177, 0.0059], device='cuda:3'), in_proj_covar=tensor([0.0103, 0.0125, 0.0109, 0.0159, 0.0112, 0.0156, 0.0160, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 03:58:40,414 INFO [train.py:898] (3/4) Epoch 12, batch 3600, loss[loss=0.1888, simple_loss=0.2836, pruned_loss=0.04698, over 18283.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2685, pruned_loss=0.04908, over 3578478.17 frames. ], batch size: 57, lr: 9.30e-03, grad_scale: 8.0 2023-03-09 03:58:40,800 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=43575.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:58:49,373 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=43583.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 03:59:09,418 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.02 vs. limit=2.0 2023-03-09 03:59:46,562 INFO [train.py:898] (3/4) Epoch 13, batch 0, loss[loss=0.1734, simple_loss=0.2596, pruned_loss=0.04358, over 18487.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2596, pruned_loss=0.04358, over 18487.00 frames. ], batch size: 51, lr: 8.93e-03, grad_scale: 8.0 2023-03-09 03:59:46,562 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 03:59:58,387 INFO [train.py:932] (3/4) Epoch 13, validation: loss=0.1568, simple_loss=0.2587, pruned_loss=0.02742, over 944034.00 frames. 2023-03-09 03:59:58,387 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 04:00:49,454 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.977e+02 3.410e+02 4.106e+02 5.043e+02 1.786e+03, threshold=8.212e+02, percent-clipped=7.0 2023-03-09 04:00:57,357 INFO [train.py:898] (3/4) Epoch 13, batch 50, loss[loss=0.1704, simple_loss=0.2567, pruned_loss=0.04206, over 18507.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.2717, pruned_loss=0.0498, over 816087.79 frames. ], batch size: 47, lr: 8.92e-03, grad_scale: 8.0 2023-03-09 04:01:04,878 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.02 vs. limit=2.0 2023-03-09 04:01:11,844 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4453, 5.4361, 4.8560, 5.3325, 5.3696, 4.8222, 5.2668, 4.9199], device='cuda:3'), covar=tensor([0.0661, 0.0565, 0.1896, 0.1073, 0.0699, 0.0544, 0.0602, 0.1252], device='cuda:3'), in_proj_covar=tensor([0.0415, 0.0481, 0.0632, 0.0377, 0.0366, 0.0434, 0.0465, 0.0600], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 04:01:56,183 INFO [train.py:898] (3/4) Epoch 13, batch 100, loss[loss=0.1973, simple_loss=0.2827, pruned_loss=0.05592, over 18398.00 frames. 
], tot_loss[loss=0.1842, simple_loss=0.2703, pruned_loss=0.04906, over 1445807.35 frames. ], batch size: 50, lr: 8.92e-03, grad_scale: 8.0 2023-03-09 04:02:37,087 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-09 04:02:46,375 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0781, 5.1436, 5.2336, 5.0136, 4.8702, 5.0191, 5.4151, 5.3135], device='cuda:3'), covar=tensor([0.0075, 0.0072, 0.0059, 0.0093, 0.0067, 0.0100, 0.0074, 0.0091], device='cuda:3'), in_proj_covar=tensor([0.0082, 0.0058, 0.0061, 0.0078, 0.0065, 0.0089, 0.0075, 0.0075], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 04:02:47,116 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.935e+02 2.941e+02 3.301e+02 3.834e+02 7.601e+02, threshold=6.602e+02, percent-clipped=0.0 2023-03-09 04:02:55,218 INFO [train.py:898] (3/4) Epoch 13, batch 150, loss[loss=0.1952, simple_loss=0.2731, pruned_loss=0.0586, over 18619.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2699, pruned_loss=0.04877, over 1917992.20 frames. ], batch size: 52, lr: 8.91e-03, grad_scale: 8.0 2023-03-09 04:03:28,207 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43787.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 04:03:53,890 INFO [train.py:898] (3/4) Epoch 13, batch 200, loss[loss=0.1724, simple_loss=0.2612, pruned_loss=0.04175, over 18270.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2702, pruned_loss=0.04863, over 2299328.88 frames. ], batch size: 49, lr: 8.91e-03, grad_scale: 8.0 2023-03-09 04:04:40,716 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=43848.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 04:04:44,845 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.130e+02 3.045e+02 3.524e+02 4.456e+02 7.991e+02, threshold=7.049e+02, percent-clipped=5.0 2023-03-09 04:04:53,609 INFO [train.py:898] (3/4) Epoch 13, batch 250, loss[loss=0.1929, simple_loss=0.2799, pruned_loss=0.05298, over 18557.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2701, pruned_loss=0.04867, over 2577917.45 frames. ], batch size: 54, lr: 8.90e-03, grad_scale: 8.0 2023-03-09 04:05:04,699 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.68 vs. limit=2.0 2023-03-09 04:05:06,502 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43870.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 04:05:38,364 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9795, 3.9515, 5.2806, 4.6760, 3.4920, 3.2900, 4.8850, 5.5218], device='cuda:3'), covar=tensor([0.0702, 0.1379, 0.0122, 0.0272, 0.0775, 0.0953, 0.0258, 0.0157], device='cuda:3'), in_proj_covar=tensor([0.0138, 0.0249, 0.0112, 0.0164, 0.0180, 0.0177, 0.0177, 0.0156], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 04:05:52,783 INFO [train.py:898] (3/4) Epoch 13, batch 300, loss[loss=0.1969, simple_loss=0.2859, pruned_loss=0.05394, over 18291.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2713, pruned_loss=0.04925, over 2800968.49 frames. 
2023-03-09 04:05:53,184 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43909.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:05:55,802 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0
2023-03-09 04:06:43,697 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.829e+02 2.907e+02 3.635e+02 4.242e+02 7.161e+02, threshold=7.270e+02, percent-clipped=1.0
2023-03-09 04:06:52,913 INFO [train.py:898] (3/4) Epoch 13, batch 350, loss[loss=0.168, simple_loss=0.2589, pruned_loss=0.03861, over 18497.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2703, pruned_loss=0.04914, over 2966478.73 frames. ], batch size: 51, lr: 8.89e-03, grad_scale: 8.0
2023-03-09 04:06:53,334 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43959.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:07:06,110 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=43970.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:07:50,877 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-09 04:07:56,732 INFO [train.py:898] (3/4) Epoch 13, batch 400, loss[loss=0.1578, simple_loss=0.2351, pruned_loss=0.0402, over 18420.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2685, pruned_loss=0.04845, over 3113863.60 frames. ], batch size: 43, lr: 8.89e-03, grad_scale: 8.0
2023-03-09 04:08:10,365 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=44020.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:08:26,647 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5620, 2.1297, 2.5801, 2.5731, 3.2522, 5.0122, 4.5527, 3.8129], device='cuda:3'), covar=tensor([0.1409, 0.2204, 0.2556, 0.1516, 0.1973, 0.0128, 0.0381, 0.0567], device='cuda:3'), in_proj_covar=tensor([0.0254, 0.0312, 0.0332, 0.0253, 0.0369, 0.0197, 0.0268, 0.0214], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 04:08:42,057 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.86 vs. limit=5.0
2023-03-09 04:08:47,392 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.657e+02 2.874e+02 3.424e+02 4.227e+02 8.593e+02, threshold=6.849e+02, percent-clipped=4.0
2023-03-09 04:08:55,692 INFO [train.py:898] (3/4) Epoch 13, batch 450, loss[loss=0.2263, simple_loss=0.3028, pruned_loss=0.07491, over 16137.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.269, pruned_loss=0.04855, over 3211609.10 frames. ], batch size: 95, lr: 8.88e-03, grad_scale: 16.0
2023-03-09 04:09:54,524 INFO [train.py:898] (3/4) Epoch 13, batch 500, loss[loss=0.2201, simple_loss=0.3092, pruned_loss=0.06549, over 18289.00 frames. ], tot_loss[loss=0.183, simple_loss=0.2692, pruned_loss=0.04842, over 3295814.82 frames. ], batch size: 57, lr: 8.88e-03, grad_scale: 16.0
2023-03-09 04:10:34,486 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44143.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:10:44,861 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.103e+02 3.159e+02 3.633e+02 4.577e+02 9.760e+02, threshold=7.265e+02, percent-clipped=1.0
2023-03-09 04:10:53,410 INFO [train.py:898] (3/4) Epoch 13, batch 550, loss[loss=0.164, simple_loss=0.2417, pruned_loss=0.04312, over 18585.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2681, pruned_loss=0.04808, over 3371933.71 frames. ], batch size: 45, lr: 8.87e-03, grad_scale: 16.0
2023-03-09 04:11:07,449 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44170.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:11:13,363 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5591, 5.5326, 5.0513, 5.4953, 5.4064, 4.7738, 5.3549, 5.0464], device='cuda:3'), covar=tensor([0.0422, 0.0381, 0.1479, 0.0727, 0.0557, 0.0432, 0.0434, 0.0970], device='cuda:3'), in_proj_covar=tensor([0.0416, 0.0486, 0.0638, 0.0377, 0.0372, 0.0439, 0.0470, 0.0607], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 04:11:42,400 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8834, 2.3628, 2.1943, 2.5000, 2.9881, 2.9435, 2.6975, 2.6455], device='cuda:3'), covar=tensor([0.0220, 0.0282, 0.0539, 0.0381, 0.0201, 0.0170, 0.0341, 0.0288], device='cuda:3'), in_proj_covar=tensor([0.0124, 0.0113, 0.0154, 0.0142, 0.0109, 0.0095, 0.0138, 0.0135], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:11:53,539 INFO [train.py:898] (3/4) Epoch 13, batch 600, loss[loss=0.1849, simple_loss=0.2787, pruned_loss=0.0456, over 18383.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.267, pruned_loss=0.04791, over 3420050.67 frames. ], batch size: 52, lr: 8.87e-03, grad_scale: 16.0
2023-03-09 04:11:58,361 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0045, 3.3612, 3.4615, 2.8304, 2.9865, 2.8004, 2.4089, 2.2432], device='cuda:3'), covar=tensor([0.0262, 0.0148, 0.0102, 0.0283, 0.0371, 0.0241, 0.0588, 0.0708], device='cuda:3'), in_proj_covar=tensor([0.0060, 0.0047, 0.0049, 0.0060, 0.0081, 0.0058, 0.0070, 0.0078], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 04:12:04,402 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=44218.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:12:05,645 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5233, 3.3298, 2.0404, 4.2689, 2.9915, 4.3141, 2.3281, 3.7690], device='cuda:3'), covar=tensor([0.0540, 0.0794, 0.1461, 0.0516, 0.0839, 0.0322, 0.1147, 0.0412], device='cuda:3'), in_proj_covar=tensor([0.0198, 0.0213, 0.0181, 0.0248, 0.0179, 0.0247, 0.0191, 0.0190], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:12:42,716 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.069e+02 3.135e+02 3.711e+02 4.524e+02 8.017e+02, threshold=7.422e+02, percent-clipped=1.0
2023-03-09 04:12:51,314 INFO [train.py:898] (3/4) Epoch 13, batch 650, loss[loss=0.1822, simple_loss=0.2746, pruned_loss=0.04494, over 16067.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2677, pruned_loss=0.04823, over 3459938.88 frames. ], batch size: 94, lr: 8.86e-03, grad_scale: 16.0
2023-03-09 04:12:58,909 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44265.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:13:08,915 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
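The optim.py:369 records list the min/25%/50%/75%/max of recent gradient norms, and in every record the threshold equals Clipping_scale times the median (e.g. 2.0 * 3.711e+02 = 7.422e+02 just above); percent-clipped reports how often that threshold was exceeded. A sketch of that bookkeeping, where the window size and the guard are assumptions rather than optim.py's actual scheme:

```python
import statistics
from collections import deque

class GradNormClipper:
    """Track recent gradient norms; clip against clipping_scale * median."""

    def __init__(self, clipping_scale: float = 2.0, window: int = 128):
        self.clipping_scale = clipping_scale
        self.norms = deque(maxlen=window)  # window size is an assumption

    def threshold(self, grad_norm: float) -> float:
        self.norms.append(grad_norm)
        if len(self.norms) < 4:
            return float("inf")  # not enough history to form quartiles yet
        lo, hi = min(self.norms), max(self.norms)
        q1, med, q3 = statistics.quantiles(self.norms, n=4)
        thresh = self.clipping_scale * med  # 2.0 x median, as in the log
        print(f"grad-norm quartiles {lo:.3e} {q1:.3e} {med:.3e} {q3:.3e} "
              f"{hi:.3e}, threshold={thresh:.3e}, clipped={grad_norm > thresh}")
        return thresh
```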
2023-03-09 04:13:27,416 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9635, 4.0659, 2.4009, 3.9921, 4.9740, 2.5055, 3.6689, 3.8534], device='cuda:3'), covar=tensor([0.0086, 0.1006, 0.1616, 0.0527, 0.0055, 0.1228, 0.0682, 0.0714], device='cuda:3'), in_proj_covar=tensor([0.0125, 0.0239, 0.0193, 0.0191, 0.0094, 0.0174, 0.0204, 0.0209], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:13:49,368 INFO [train.py:898] (3/4) Epoch 13, batch 700, loss[loss=0.1816, simple_loss=0.2669, pruned_loss=0.04814, over 18392.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.268, pruned_loss=0.04821, over 3487293.26 frames. ], batch size: 50, lr: 8.86e-03, grad_scale: 8.0
2023-03-09 04:13:57,545 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44315.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:14:41,972 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.372e+02 3.099e+02 3.698e+02 4.790e+02 1.213e+03, threshold=7.395e+02, percent-clipped=5.0
2023-03-09 04:14:48,751 INFO [train.py:898] (3/4) Epoch 13, batch 750, loss[loss=0.1981, simple_loss=0.2881, pruned_loss=0.05402, over 18018.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2686, pruned_loss=0.04854, over 3487977.92 frames. ], batch size: 65, lr: 8.85e-03, grad_scale: 8.0
2023-03-09 04:15:23,425 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6907, 5.2305, 5.2220, 5.1784, 4.7513, 5.0653, 4.5361, 5.0797], device='cuda:3'), covar=tensor([0.0222, 0.0265, 0.0186, 0.0364, 0.0327, 0.0226, 0.0968, 0.0281], device='cuda:3'), in_proj_covar=tensor([0.0186, 0.0232, 0.0221, 0.0264, 0.0232, 0.0233, 0.0288, 0.0224], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3')
2023-03-09 04:15:48,579 INFO [train.py:898] (3/4) Epoch 13, batch 800, loss[loss=0.2139, simple_loss=0.3004, pruned_loss=0.06372, over 18369.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2681, pruned_loss=0.04846, over 3525356.89 frames. ], batch size: 56, lr: 8.85e-03, grad_scale: 8.0
2023-03-09 04:16:06,072 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7542, 4.1696, 2.7687, 3.9185, 3.9949, 4.1683, 4.0446, 2.6919], device='cuda:3'), covar=tensor([0.0154, 0.0071, 0.0642, 0.0194, 0.0083, 0.0071, 0.0100, 0.0869], device='cuda:3'), in_proj_covar=tensor([0.0078, 0.0070, 0.0089, 0.0085, 0.0077, 0.0066, 0.0078, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 04:16:29,589 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44443.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:16:40,511 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.201e+02 3.366e+02 4.098e+02 5.276e+02 1.273e+03, threshold=8.196e+02, percent-clipped=10.0
2023-03-09 04:16:47,520 INFO [train.py:898] (3/4) Epoch 13, batch 850, loss[loss=0.1601, simple_loss=0.2435, pruned_loss=0.03833, over 18557.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2683, pruned_loss=0.04838, over 3539122.81 frames. ], batch size: 45, lr: 8.84e-03, grad_scale: 8.0
2023-03-09 04:17:26,165 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=44491.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:17:35,753 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0
2023-03-09 04:17:45,186 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0207, 5.2203, 2.8060, 4.9955, 4.8555, 5.1949, 4.9698, 2.5421], device='cuda:3'), covar=tensor([0.0171, 0.0054, 0.0699, 0.0089, 0.0076, 0.0063, 0.0099, 0.0988], device='cuda:3'), in_proj_covar=tensor([0.0078, 0.0070, 0.0089, 0.0085, 0.0078, 0.0067, 0.0078, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 04:17:45,253 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2200, 2.5786, 2.3202, 2.6384, 3.3398, 3.2734, 2.8398, 2.7132], device='cuda:3'), covar=tensor([0.0274, 0.0295, 0.0636, 0.0432, 0.0233, 0.0187, 0.0419, 0.0393], device='cuda:3'), in_proj_covar=tensor([0.0123, 0.0113, 0.0154, 0.0143, 0.0110, 0.0094, 0.0139, 0.0135], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:17:45,918 INFO [train.py:898] (3/4) Epoch 13, batch 900, loss[loss=0.1879, simple_loss=0.2789, pruned_loss=0.04845, over 18215.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2689, pruned_loss=0.0485, over 3547223.46 frames. ], batch size: 60, lr: 8.84e-03, grad_scale: 8.0
2023-03-09 04:17:50,246 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0
2023-03-09 04:18:07,282 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=44527.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:18:34,304 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=44550.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:18:37,464 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.936e+02 2.996e+02 3.521e+02 4.536e+02 7.966e+02, threshold=7.042e+02, percent-clipped=0.0
2023-03-09 04:18:44,371 INFO [train.py:898] (3/4) Epoch 13, batch 950, loss[loss=0.1907, simple_loss=0.2804, pruned_loss=0.05049, over 16936.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2687, pruned_loss=0.04834, over 3563625.13 frames. ], batch size: 78, lr: 8.84e-03, grad_scale: 8.0
2023-03-09 04:18:51,483 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44565.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:19:18,521 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=44588.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:19:27,687 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7503, 2.8762, 4.2015, 3.8276, 2.6283, 4.5013, 3.9780, 2.8563], device='cuda:3'), covar=tensor([0.0385, 0.1401, 0.0208, 0.0305, 0.1615, 0.0173, 0.0418, 0.0928], device='cuda:3'), in_proj_covar=tensor([0.0192, 0.0224, 0.0164, 0.0145, 0.0214, 0.0188, 0.0211, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 04:19:43,367 INFO [train.py:898] (3/4) Epoch 13, batch 1000, loss[loss=0.1782, simple_loss=0.2659, pruned_loss=0.04527, over 18388.00 frames. ], tot_loss[loss=0.1832, simple_loss=0.2694, pruned_loss=0.04848, over 3556212.11 frames. ], batch size: 50, lr: 8.83e-03, grad_scale: 8.0
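grad_scale in these records is the dynamic fp16 loss scale: it doubled from 8.0 to 16.0 around batch 450, is back at 8.0 by batch 700, and dips to 4.0 around batch 1050 further down, the usual signature of automatic mixed precision halving the scale after a step with inf/nan gradients and growing it again after a run of clean steps. A generic PyTorch sketch of that behaviour (standard torch.cuda.amp, not the project's exact training loop):

```python
import torch

model = torch.nn.Linear(80, 500).cuda()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# growth_factor/backoff_factor give exactly the doubling/halving seen above.
scaler = torch.cuda.amp.GradScaler(init_scale=8.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)

def train_step(x: torch.Tensor, y: torch.Tensor):
    opt.zero_grad()
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(opt)   # silently skipped if the grads contain inf/nan
    scaler.update()    # halves the scale on overflow, grows it otherwise
    return loss.detach(), scaler.get_scale()  # get_scale() ~ "grad_scale"
```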
2023-03-09 04:19:46,087 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=44611.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:19:48,237 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=44613.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:19:50,675 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44615.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:20:36,781 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.889e+02 3.069e+02 3.543e+02 4.304e+02 1.212e+03, threshold=7.086e+02, percent-clipped=7.0
2023-03-09 04:20:42,640 INFO [train.py:898] (3/4) Epoch 13, batch 1050, loss[loss=0.1714, simple_loss=0.2592, pruned_loss=0.04183, over 18533.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2684, pruned_loss=0.04793, over 3562700.53 frames. ], batch size: 49, lr: 8.83e-03, grad_scale: 4.0
2023-03-09 04:20:46,384 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6704, 5.3331, 5.3690, 5.4046, 4.8893, 5.2603, 4.2205, 5.2667], device='cuda:3'), covar=tensor([0.0328, 0.0399, 0.0279, 0.0359, 0.0358, 0.0274, 0.1881, 0.0296], device='cuda:3'), in_proj_covar=tensor([0.0188, 0.0234, 0.0223, 0.0268, 0.0236, 0.0234, 0.0294, 0.0226], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 04:20:47,378 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=44663.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:21:41,959 INFO [train.py:898] (3/4) Epoch 13, batch 1100, loss[loss=0.2194, simple_loss=0.3011, pruned_loss=0.06888, over 18406.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2691, pruned_loss=0.04822, over 3572160.24 frames. ], batch size: 52, lr: 8.82e-03, grad_scale: 4.0
2023-03-09 04:22:35,541 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.712e+02 2.982e+02 3.567e+02 4.035e+02 7.842e+02, threshold=7.134e+02, percent-clipped=1.0
2023-03-09 04:22:41,043 INFO [train.py:898] (3/4) Epoch 13, batch 1150, loss[loss=0.1689, simple_loss=0.2595, pruned_loss=0.03913, over 18402.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2687, pruned_loss=0.04819, over 3576649.97 frames. ], batch size: 48, lr: 8.82e-03, grad_scale: 4.0
2023-03-09 04:22:44,790 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6641, 5.2254, 5.2149, 5.2102, 4.7165, 5.0540, 4.5224, 5.0985], device='cuda:3'), covar=tensor([0.0247, 0.0278, 0.0205, 0.0367, 0.0385, 0.0271, 0.1139, 0.0287], device='cuda:3'), in_proj_covar=tensor([0.0187, 0.0234, 0.0221, 0.0266, 0.0235, 0.0234, 0.0293, 0.0225], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3')
2023-03-09 04:23:40,894 INFO [train.py:898] (3/4) Epoch 13, batch 1200, loss[loss=0.193, simple_loss=0.2813, pruned_loss=0.05231, over 18328.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2684, pruned_loss=0.04811, over 3576378.64 frames. ], batch size: 54, lr: 8.81e-03, grad_scale: 8.0
2023-03-09 04:23:53,018 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5761, 3.3507, 2.2936, 4.3249, 2.9550, 4.3019, 2.2854, 4.0187], device='cuda:3'), covar=tensor([0.0591, 0.0893, 0.1324, 0.0447, 0.0814, 0.0317, 0.1312, 0.0321], device='cuda:3'), in_proj_covar=tensor([0.0201, 0.0216, 0.0183, 0.0252, 0.0183, 0.0249, 0.0194, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:24:30,553 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-09 04:24:33,791 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.745e+02 2.967e+02 3.663e+02 4.375e+02 1.162e+03, threshold=7.327e+02, percent-clipped=1.0
2023-03-09 04:24:40,012 INFO [train.py:898] (3/4) Epoch 13, batch 1250, loss[loss=0.1494, simple_loss=0.234, pruned_loss=0.03242, over 18522.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.268, pruned_loss=0.04791, over 3575938.16 frames. ], batch size: 44, lr: 8.81e-03, grad_scale: 8.0
2023-03-09 04:25:02,975 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.21 vs. limit=5.0
2023-03-09 04:25:07,224 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44883.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:25:14,729 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0
2023-03-09 04:25:35,381 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44906.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:25:38,456 INFO [train.py:898] (3/4) Epoch 13, batch 1300, loss[loss=0.1614, simple_loss=0.234, pruned_loss=0.04445, over 18462.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2685, pruned_loss=0.04791, over 3585006.62 frames. ], batch size: 43, lr: 8.80e-03, grad_scale: 8.0
2023-03-09 04:26:22,142 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.85 vs. limit=2.0
2023-03-09 04:26:30,491 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0287, 3.4611, 2.7000, 3.4472, 4.1049, 2.5268, 3.3686, 3.4456], device='cuda:3'), covar=tensor([0.0150, 0.0923, 0.1185, 0.0545, 0.0084, 0.1077, 0.0631, 0.0642], device='cuda:3'), in_proj_covar=tensor([0.0127, 0.0241, 0.0194, 0.0192, 0.0095, 0.0176, 0.0204, 0.0209], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:26:31,151 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.969e+02 2.987e+02 3.642e+02 4.868e+02 9.952e+02, threshold=7.283e+02, percent-clipped=3.0
2023-03-09 04:26:36,909 INFO [train.py:898] (3/4) Epoch 13, batch 1350, loss[loss=0.1748, simple_loss=0.2644, pruned_loss=0.04265, over 18400.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2678, pruned_loss=0.04783, over 3574640.24 frames. ], batch size: 52, lr: 8.80e-03, grad_scale: 8.0
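The zipformer.py:625 records trace stochastic layer skipping: each encoder stack reports its warm-up window (warmup_begin/warmup_end, measured in batches) and which of its layers were dropped on this step; at batch_count ~ 44600-44900, far past every warmup_end, the usual outcome is num_to_drop=0 with occasional single-layer drops. A sketch of such a schedule, with illustrative probabilities that are not taken from zipformer.py:

```python
import random

def layers_to_drop(batch_count: float, num_layers: int,
                   warmup_begin: float, warmup_end: float,
                   p_warmup: float = 0.5, p_final: float = 0.05) -> set:
    """Drop whole layers with a probability that decays from p_warmup inside
    the warm-up window to a small floor afterwards. The probabilities are
    illustrative only, not the values used in zipformer.py."""
    if batch_count < warmup_begin:
        p = 0.0
    elif batch_count < warmup_end:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = p_warmup * (1.0 - frac) + p_final * frac
    else:
        p = p_final  # long after warm-up: rare single-layer drops
    return {i for i in range(num_layers) if random.random() < p}

# Mirrors a typical record: batch_count=44615.0 >> warmup_end=2666.7,
# so the result is usually set(), occasionally a singleton like {2}.
print(layers_to_drop(44615.0, 4, 2000.0, 2666.7))
```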
2023-03-09 04:26:40,037 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6922, 3.6361, 3.4364, 3.0984, 3.4382, 2.7723, 2.7136, 3.7279], device='cuda:3'), covar=tensor([0.0042, 0.0075, 0.0073, 0.0118, 0.0080, 0.0155, 0.0169, 0.0043], device='cuda:3'), in_proj_covar=tensor([0.0105, 0.0128, 0.0111, 0.0161, 0.0113, 0.0157, 0.0163, 0.0095], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 04:27:11,397 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0
2023-03-09 04:27:35,997 INFO [train.py:898] (3/4) Epoch 13, batch 1400, loss[loss=0.1965, simple_loss=0.2818, pruned_loss=0.05562, over 18235.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2679, pruned_loss=0.04759, over 3590634.78 frames. ], batch size: 60, lr: 8.79e-03, grad_scale: 8.0
2023-03-09 04:28:28,829 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.199e+02 3.129e+02 3.917e+02 4.727e+02 1.364e+03, threshold=7.834e+02, percent-clipped=4.0
2023-03-09 04:28:32,230 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45056.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:28:35,350 INFO [train.py:898] (3/4) Epoch 13, batch 1450, loss[loss=0.2002, simple_loss=0.2903, pruned_loss=0.05506, over 18283.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2679, pruned_loss=0.04765, over 3605141.13 frames. ], batch size: 57, lr: 8.79e-03, grad_scale: 8.0
2023-03-09 04:29:01,656 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45081.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:29:34,108 INFO [train.py:898] (3/4) Epoch 13, batch 1500, loss[loss=0.1789, simple_loss=0.261, pruned_loss=0.04845, over 18386.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2692, pruned_loss=0.04805, over 3594336.88 frames. ], batch size: 46, lr: 8.78e-03, grad_scale: 8.0
2023-03-09 04:29:44,344 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45117.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 04:29:50,468 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7146, 3.0632, 4.1739, 3.8282, 2.8884, 4.6143, 3.9903, 3.0458], device='cuda:3'), covar=tensor([0.0438, 0.1289, 0.0205, 0.0298, 0.1328, 0.0161, 0.0401, 0.0894], device='cuda:3'), in_proj_covar=tensor([0.0196, 0.0226, 0.0166, 0.0146, 0.0216, 0.0194, 0.0216, 0.0194], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 04:30:13,275 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45142.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:30:26,572 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.379e+02 3.129e+02 3.724e+02 4.200e+02 9.663e+02, threshold=7.448e+02, percent-clipped=2.0
2023-03-09 04:30:33,389 INFO [train.py:898] (3/4) Epoch 13, batch 1550, loss[loss=0.1849, simple_loss=0.2785, pruned_loss=0.0457, over 18394.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2692, pruned_loss=0.04805, over 3596261.56 frames. ], batch size: 52, lr: 8.78e-03, grad_scale: 8.0
2023-03-09 04:31:01,935 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45183.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:31:28,184 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45206.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:31:32,575 INFO [train.py:898] (3/4) Epoch 13, batch 1600, loss[loss=0.1732, simple_loss=0.2558, pruned_loss=0.04528, over 18511.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2689, pruned_loss=0.04803, over 3601269.63 frames. ], batch size: 47, lr: 8.77e-03, grad_scale: 8.0
2023-03-09 04:31:51,301 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4199, 5.3980, 5.6542, 5.6009, 5.3798, 6.2127, 5.8587, 5.6349], device='cuda:3'), covar=tensor([0.1142, 0.0652, 0.0681, 0.0619, 0.1493, 0.0863, 0.0579, 0.1593], device='cuda:3'), in_proj_covar=tensor([0.0322, 0.0252, 0.0266, 0.0265, 0.0306, 0.0379, 0.0248, 0.0365], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3')
2023-03-09 04:31:58,522 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=45231.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:32:24,701 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.718e+02 3.180e+02 3.814e+02 4.660e+02 1.002e+03, threshold=7.628e+02, percent-clipped=5.0
2023-03-09 04:32:24,932 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=45254.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:32:30,336 INFO [train.py:898] (3/4) Epoch 13, batch 1650, loss[loss=0.1888, simple_loss=0.2726, pruned_loss=0.05255, over 18400.00 frames. ], tot_loss[loss=0.183, simple_loss=0.2692, pruned_loss=0.04836, over 3598511.31 frames. ], batch size: 52, lr: 8.77e-03, grad_scale: 8.0
2023-03-09 04:33:09,081 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5330, 3.6380, 5.1072, 4.4216, 3.3034, 2.8460, 4.3513, 5.3150], device='cuda:3'), covar=tensor([0.0860, 0.1720, 0.0124, 0.0327, 0.0923, 0.1179, 0.0364, 0.0153], device='cuda:3'), in_proj_covar=tensor([0.0138, 0.0251, 0.0113, 0.0165, 0.0182, 0.0177, 0.0178, 0.0160], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:33:20,584 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4611, 3.8281, 5.1835, 4.2569, 2.9414, 2.6225, 4.2994, 5.4036], device='cuda:3'), covar=tensor([0.0849, 0.1605, 0.0126, 0.0376, 0.0977, 0.1212, 0.0390, 0.0117], device='cuda:3'), in_proj_covar=tensor([0.0138, 0.0251, 0.0113, 0.0165, 0.0182, 0.0177, 0.0178, 0.0160], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:33:29,068 INFO [train.py:898] (3/4) Epoch 13, batch 1700, loss[loss=0.1818, simple_loss=0.2716, pruned_loss=0.04601, over 16054.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2683, pruned_loss=0.04764, over 3605029.79 frames. ], batch size: 94, lr: 8.76e-03, grad_scale: 8.0
2023-03-09 04:33:35,429 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.16 vs. limit=5.0
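The lr column decays smoothly within the epoch (8.92e-03 at batch 150 down to 8.76e-03 here), matching icefall's Eden schedule, lr = base_lr * ((step^2 + lr_batches^2)/lr_batches^2)^-0.25 * ((epoch^2 + lr_epochs^2)/lr_epochs^2)^-0.25. The parameter values below (base_lr=0.05, lr_batches=5000, lr_epochs=3.5) and the zero-based epoch index are assumptions, but they reproduce the logged values:

```python
def eden_lr(base_lr: float, step: int, epoch: int,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    """Sketch of the Eden learning-rate schedule; parameters are assumed."""
    batch_factor = ((step ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# "Epoch 14, batch 0" later in the log appears at batch_count ~ 47251
# with lr: 8.27e-03, and the formula agrees:
print(f"{eden_lr(0.05, 47251, 13):.2e}")  # -> 8.27e-03
```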
2023-03-09 04:34:15,812 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45348.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 04:34:22,328 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.903e+02 2.834e+02 3.670e+02 4.509e+02 1.027e+03, threshold=7.340e+02, percent-clipped=3.0
2023-03-09 04:34:28,088 INFO [train.py:898] (3/4) Epoch 13, batch 1750, loss[loss=0.1577, simple_loss=0.2336, pruned_loss=0.04093, over 18486.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2685, pruned_loss=0.04785, over 3593509.40 frames. ], batch size: 44, lr: 8.76e-03, grad_scale: 8.0
2023-03-09 04:35:15,624 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.10 vs. limit=5.0
2023-03-09 04:35:27,720 INFO [train.py:898] (3/4) Epoch 13, batch 1800, loss[loss=0.1434, simple_loss=0.2269, pruned_loss=0.02999, over 18416.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2678, pruned_loss=0.04751, over 3604662.59 frames. ], batch size: 43, lr: 8.75e-03, grad_scale: 8.0
2023-03-09 04:35:28,171 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45409.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 04:35:31,486 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=45412.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 04:36:01,119 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=45437.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:36:21,057 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.197e+02 3.089e+02 3.615e+02 4.391e+02 8.521e+02, threshold=7.230e+02, percent-clipped=5.0
2023-03-09 04:36:26,728 INFO [train.py:898] (3/4) Epoch 13, batch 1850, loss[loss=0.2055, simple_loss=0.2921, pruned_loss=0.05945, over 18498.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2679, pruned_loss=0.04778, over 3595786.06 frames. ], batch size: 53, lr: 8.75e-03, grad_scale: 8.0
2023-03-09 04:36:32,792 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6803, 4.1655, 2.6842, 3.9613, 3.9969, 4.1781, 4.0205, 2.6800], device='cuda:3'), covar=tensor([0.0187, 0.0088, 0.0728, 0.0238, 0.0100, 0.0075, 0.0113, 0.0922], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0071, 0.0090, 0.0086, 0.0079, 0.0067, 0.0078, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 04:36:37,254 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3459, 4.3602, 4.3552, 4.2422, 4.1671, 4.2538, 4.5326, 4.5248], device='cuda:3'), covar=tensor([0.0076, 0.0070, 0.0074, 0.0100, 0.0076, 0.0134, 0.0061, 0.0088], device='cuda:3'), in_proj_covar=tensor([0.0083, 0.0059, 0.0062, 0.0079, 0.0066, 0.0090, 0.0075, 0.0076], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:36:43,114 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. limit=2.0
2023-03-09 04:36:57,589 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4648, 6.1223, 5.4972, 5.8748, 5.6422, 5.5326, 6.1203, 6.0974], device='cuda:3'), covar=tensor([0.1213, 0.0584, 0.0405, 0.0631, 0.1460, 0.0682, 0.0562, 0.0585], device='cuda:3'), in_proj_covar=tensor([0.0541, 0.0454, 0.0342, 0.0486, 0.0665, 0.0488, 0.0636, 0.0478], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 04:37:13,842 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3864, 3.8839, 3.9084, 2.9832, 3.3708, 3.2127, 2.2798, 2.3118], device='cuda:3'), covar=tensor([0.0231, 0.0146, 0.0099, 0.0324, 0.0285, 0.0214, 0.0757, 0.0819], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0049, 0.0051, 0.0061, 0.0082, 0.0059, 0.0072, 0.0078], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 04:37:15,397 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-09 04:37:25,619 INFO [train.py:898] (3/4) Epoch 13, batch 1900, loss[loss=0.1776, simple_loss=0.2635, pruned_loss=0.04587, over 18376.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2685, pruned_loss=0.0482, over 3598286.58 frames. ], batch size: 50, lr: 8.74e-03, grad_scale: 8.0
2023-03-09 04:37:50,465 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3153, 5.1755, 5.5042, 5.4481, 5.2917, 6.1545, 5.7452, 5.4426], device='cuda:3'), covar=tensor([0.1120, 0.0584, 0.0684, 0.0613, 0.1449, 0.0687, 0.0523, 0.1517], device='cuda:3'), in_proj_covar=tensor([0.0326, 0.0257, 0.0270, 0.0270, 0.0312, 0.0377, 0.0251, 0.0370], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3')
2023-03-09 04:38:18,702 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.016e+02 2.946e+02 3.593e+02 4.543e+02 1.018e+03, threshold=7.187e+02, percent-clipped=4.0
2023-03-09 04:38:24,488 INFO [train.py:898] (3/4) Epoch 13, batch 1950, loss[loss=0.174, simple_loss=0.2588, pruned_loss=0.04462, over 18245.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2687, pruned_loss=0.04825, over 3597413.06 frames. ], batch size: 45, lr: 8.74e-03, grad_scale: 8.0
2023-03-09 04:38:59,343 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9705, 5.1506, 2.5130, 4.9681, 4.8767, 5.1640, 4.9623, 2.5148], device='cuda:3'), covar=tensor([0.0191, 0.0080, 0.0879, 0.0093, 0.0073, 0.0061, 0.0087, 0.1060], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0070, 0.0089, 0.0086, 0.0079, 0.0067, 0.0078, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 04:39:18,572 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0834, 4.2441, 2.3099, 4.1987, 5.2116, 2.2642, 3.6263, 3.9734], device='cuda:3'), covar=tensor([0.0098, 0.1000, 0.1627, 0.0546, 0.0049, 0.1334, 0.0730, 0.0689], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0241, 0.0195, 0.0191, 0.0095, 0.0175, 0.0205, 0.0208], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:39:23,850 INFO [train.py:898] (3/4) Epoch 13, batch 2000, loss[loss=0.1832, simple_loss=0.2736, pruned_loss=0.04637, over 18467.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2702, pruned_loss=0.049, over 3589617.55 frames. ], batch size: 53, lr: 8.73e-03, grad_scale: 8.0
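The zipformer.py:1455 records report, per attention head, the mean entropy of the attention weights (plus covariance summaries of the input/output projections). Entropy here is in nats, so a row over ~200 positions is bounded by log(200) ~ 5.3: heads logging ~5 attend almost uniformly, heads logging ~2 are sharply focused. A sketch of the entropy part only:

```python
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    """attn: (num_heads, seq_len, seq_len); each row is a softmax distribution.
    Returns the mean entropy per head, in nats."""
    ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (num_heads, seq_len)
    return ent.mean(dim=-1)                           # (num_heads,)

# Entropy of a row over 200 positions is at most log(200) ~ 5.3 nats, so
# the heads logging ~5 above are close to uniform attention.
attn = torch.softmax(torch.randn(8, 200, 200), dim=-1)
print(attn_weights_entropy(attn))
```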
2023-03-09 04:39:25,405 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8182, 4.7549, 4.8581, 4.6058, 4.5904, 4.6393, 4.9694, 4.9919], device='cuda:3'), covar=tensor([0.0065, 0.0071, 0.0074, 0.0109, 0.0074, 0.0132, 0.0075, 0.0089], device='cuda:3'), in_proj_covar=tensor([0.0084, 0.0059, 0.0062, 0.0079, 0.0065, 0.0091, 0.0076, 0.0076], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:40:04,664 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9741, 2.9826, 4.3708, 4.0177, 2.8550, 4.8028, 4.1780, 3.0763], device='cuda:3'), covar=tensor([0.0380, 0.1467, 0.0272, 0.0361, 0.1508, 0.0165, 0.0405, 0.0980], device='cuda:3'), in_proj_covar=tensor([0.0195, 0.0225, 0.0165, 0.0147, 0.0214, 0.0193, 0.0214, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 04:40:17,736 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.026e+02 2.791e+02 3.256e+02 3.955e+02 1.111e+03, threshold=6.512e+02, percent-clipped=4.0
2023-03-09 04:40:23,302 INFO [train.py:898] (3/4) Epoch 13, batch 2050, loss[loss=0.2003, simple_loss=0.2889, pruned_loss=0.05588, over 18576.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2691, pruned_loss=0.0485, over 3599795.47 frames. ], batch size: 54, lr: 8.73e-03, grad_scale: 8.0
2023-03-09 04:40:35,307 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.03 vs. limit=5.0
2023-03-09 04:40:46,583 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6902, 3.6641, 3.4620, 3.1974, 3.3726, 2.6970, 2.6913, 3.7051], device='cuda:3'), covar=tensor([0.0049, 0.0074, 0.0077, 0.0112, 0.0083, 0.0187, 0.0170, 0.0057], device='cuda:3'), in_proj_covar=tensor([0.0104, 0.0126, 0.0110, 0.0158, 0.0110, 0.0157, 0.0159, 0.0094], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 04:41:14,603 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45702.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:41:17,262 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=45704.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 04:41:22,659 INFO [train.py:898] (3/4) Epoch 13, batch 2100, loss[loss=0.1979, simple_loss=0.2851, pruned_loss=0.05537, over 16118.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2692, pruned_loss=0.04848, over 3596798.72 frames. ], batch size: 94, lr: 8.72e-03, grad_scale: 8.0
2023-03-09 04:41:26,446 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45712.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 04:41:38,918 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3080, 4.7175, 4.3684, 4.6235, 4.4154, 4.4084, 4.7843, 4.7165], device='cuda:3'), covar=tensor([0.1210, 0.0797, 0.1780, 0.0712, 0.1490, 0.0780, 0.0768, 0.0755], device='cuda:3'), in_proj_covar=tensor([0.0553, 0.0461, 0.0347, 0.0495, 0.0677, 0.0500, 0.0652, 0.0485], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 04:41:55,637 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45737.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:42:14,743 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.079e+02 3.037e+02 3.571e+02 4.287e+02 1.143e+03, threshold=7.142e+02, percent-clipped=3.0
2023-03-09 04:42:21,819 INFO [train.py:898] (3/4) Epoch 13, batch 2150, loss[loss=0.1778, simple_loss=0.2591, pruned_loss=0.04818, over 17717.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2685, pruned_loss=0.04812, over 3599708.30 frames. ], batch size: 39, lr: 8.72e-03, grad_scale: 8.0
2023-03-09 04:42:23,181 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=45760.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:42:24,521 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0334, 4.0475, 2.5172, 4.2148, 5.1591, 2.5828, 3.3239, 3.7220], device='cuda:3'), covar=tensor([0.0127, 0.1334, 0.1518, 0.0557, 0.0060, 0.1268, 0.0790, 0.0782], device='cuda:3'), in_proj_covar=tensor([0.0130, 0.0244, 0.0196, 0.0192, 0.0096, 0.0177, 0.0206, 0.0211], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:42:26,751 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45763.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:42:38,829 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45774.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:42:47,777 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2108, 4.2607, 2.4565, 4.2374, 5.2403, 2.3511, 3.7949, 3.9671], device='cuda:3'), covar=tensor([0.0081, 0.1052, 0.1577, 0.0566, 0.0047, 0.1329, 0.0677, 0.0767], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0243, 0.0196, 0.0192, 0.0096, 0.0176, 0.0205, 0.0211], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:42:51,019 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=45785.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:43:16,107 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3054, 5.2762, 4.8191, 5.1968, 5.2129, 4.5813, 5.1054, 4.9424], device='cuda:3'), covar=tensor([0.0411, 0.0424, 0.1448, 0.0756, 0.0526, 0.0416, 0.0432, 0.0912], device='cuda:3'), in_proj_covar=tensor([0.0421, 0.0489, 0.0646, 0.0384, 0.0379, 0.0442, 0.0468, 0.0607], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 04:43:20,398 INFO [train.py:898] (3/4) Epoch 13, batch 2200, loss[loss=0.1729, simple_loss=0.2655, pruned_loss=0.04013, over 16249.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2689, pruned_loss=0.0485, over 3584037.36 frames. ], batch size: 94, lr: 8.72e-03, grad_scale: 8.0
2023-03-09 04:43:34,478 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0
2023-03-09 04:43:50,800 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45835.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:44:12,835 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.979e+02 3.106e+02 3.775e+02 4.323e+02 7.775e+02, threshold=7.551e+02, percent-clipped=4.0
2023-03-09 04:44:15,872 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-09 04:44:18,536 INFO [train.py:898] (3/4) Epoch 13, batch 2250, loss[loss=0.1812, simple_loss=0.2744, pruned_loss=0.04398, over 18630.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.268, pruned_loss=0.04787, over 3597615.74 frames. ], batch size: 52, lr: 8.71e-03, grad_scale: 8.0
2023-03-09 04:44:18,847 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45859.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:45:17,118 INFO [train.py:898] (3/4) Epoch 13, batch 2300, loss[loss=0.1657, simple_loss=0.2484, pruned_loss=0.04154, over 18363.00 frames. ], tot_loss[loss=0.182, simple_loss=0.2679, pruned_loss=0.04804, over 3598846.95 frames. ], batch size: 46, lr: 8.71e-03, grad_scale: 8.0
2023-03-09 04:45:31,277 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45920.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:46:10,647 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.145e+02 3.100e+02 3.461e+02 4.036e+02 9.492e+02, threshold=6.921e+02, percent-clipped=1.0
2023-03-09 04:46:16,240 INFO [train.py:898] (3/4) Epoch 13, batch 2350, loss[loss=0.1734, simple_loss=0.2526, pruned_loss=0.04714, over 18492.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2681, pruned_loss=0.04811, over 3610343.10 frames. ], batch size: 44, lr: 8.70e-03, grad_scale: 8.0
2023-03-09 04:47:14,966 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46004.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 04:47:20,471 INFO [train.py:898] (3/4) Epoch 13, batch 2400, loss[loss=0.1539, simple_loss=0.2393, pruned_loss=0.03418, over 18451.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.268, pruned_loss=0.04794, over 3604240.75 frames. ], batch size: 43, lr: 8.70e-03, grad_scale: 8.0
2023-03-09 04:48:11,658 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=46052.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 04:48:14,682 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.917e+02 3.063e+02 3.864e+02 4.591e+02 1.180e+03, threshold=7.727e+02, percent-clipped=5.0
2023-03-09 04:48:18,388 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46058.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:48:19,204 INFO [train.py:898] (3/4) Epoch 13, batch 2450, loss[loss=0.1643, simple_loss=0.24, pruned_loss=0.04432, over 18256.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.268, pruned_loss=0.04773, over 3602926.84 frames. ], batch size: 45, lr: 8.69e-03, grad_scale: 8.0
2023-03-09 04:49:18,291 INFO [train.py:898] (3/4) Epoch 13, batch 2500, loss[loss=0.1898, simple_loss=0.2765, pruned_loss=0.05158, over 18629.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2678, pruned_loss=0.04741, over 3599490.30 frames. ], batch size: 52, lr: 8.69e-03, grad_scale: 8.0
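tot_loss is not the latest batch loss but a frame-weighted running statistic: its "over N frames" counter hovers around 3.6 million here, drifting up and down rather than growing without bound, which suggests a decayed frame-weighted average. A sketch under that assumption (the decay constant is illustrative; at ~18,500 frames per batch and decay 0.995 the counter settles near 18,500/0.005 = 3.7e6, the right order of magnitude):

```python
class LossTracker:
    """Frame-weighted running average of the training loss (a sketch)."""

    def __init__(self, decay: float = 0.995):  # decay constant is illustrative
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> None:
        # Old batches fade out geometrically; recent batches dominate.
        self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames

    @property
    def tot_loss(self) -> float:
        return self.loss_sum / max(self.frames, 1.0)
```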
2023-03-09 04:49:42,833 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46130.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:50:11,944 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.884e+02 3.026e+02 3.575e+02 4.443e+02 8.459e+02, threshold=7.149e+02, percent-clipped=1.0
2023-03-09 04:50:16,971 INFO [train.py:898] (3/4) Epoch 13, batch 2550, loss[loss=0.2205, simple_loss=0.3029, pruned_loss=0.06904, over 18513.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2685, pruned_loss=0.04762, over 3607625.61 frames. ], batch size: 53, lr: 8.68e-03, grad_scale: 8.0
2023-03-09 04:51:05,799 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.57 vs. limit=5.0
2023-03-09 04:51:15,929 INFO [train.py:898] (3/4) Epoch 13, batch 2600, loss[loss=0.173, simple_loss=0.264, pruned_loss=0.041, over 17119.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2677, pruned_loss=0.04745, over 3591310.20 frames. ], batch size: 78, lr: 8.68e-03, grad_scale: 8.0
2023-03-09 04:51:23,618 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46215.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:52:10,072 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.796e+02 3.019e+02 3.586e+02 4.587e+02 8.864e+02, threshold=7.172e+02, percent-clipped=4.0
2023-03-09 04:52:14,633 INFO [train.py:898] (3/4) Epoch 13, batch 2650, loss[loss=0.1716, simple_loss=0.2526, pruned_loss=0.04529, over 18258.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2672, pruned_loss=0.04734, over 3588869.73 frames. ], batch size: 47, lr: 8.67e-03, grad_scale: 8.0
2023-03-09 04:52:35,491 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46276.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:52:43,662 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4995, 5.4251, 5.0537, 5.3977, 5.3927, 4.7275, 5.3303, 5.0741], device='cuda:3'), covar=tensor([0.0365, 0.0435, 0.1182, 0.0660, 0.0514, 0.0448, 0.0379, 0.0912], device='cuda:3'), in_proj_covar=tensor([0.0416, 0.0485, 0.0637, 0.0384, 0.0371, 0.0440, 0.0467, 0.0600], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 04:53:14,123 INFO [train.py:898] (3/4) Epoch 13, batch 2700, loss[loss=0.1899, simple_loss=0.2782, pruned_loss=0.0508, over 17576.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2667, pruned_loss=0.04732, over 3588074.09 frames. ], batch size: 70, lr: 8.67e-03, grad_scale: 8.0
2023-03-09 04:53:47,868 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46337.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:54:08,566 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.122e+02 2.828e+02 3.269e+02 4.141e+02 7.295e+02, threshold=6.537e+02, percent-clipped=1.0
2023-03-09 04:54:12,458 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46358.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:54:13,366 INFO [train.py:898] (3/4) Epoch 13, batch 2750, loss[loss=0.1915, simple_loss=0.2791, pruned_loss=0.05197, over 18209.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2663, pruned_loss=0.04708, over 3591831.65 frames. ], batch size: 60, lr: 8.66e-03, grad_scale: 8.0
2023-03-09 04:54:48,379 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6311, 3.5077, 2.2538, 4.4858, 3.0756, 4.2704, 2.2763, 3.9808], device='cuda:3'), covar=tensor([0.0543, 0.0728, 0.1311, 0.0363, 0.0742, 0.0339, 0.1110, 0.0355], device='cuda:3'), in_proj_covar=tensor([0.0200, 0.0216, 0.0182, 0.0250, 0.0180, 0.0249, 0.0192, 0.0188], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:55:09,400 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=46406.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:55:12,679 INFO [train.py:898] (3/4) Epoch 13, batch 2800, loss[loss=0.1624, simple_loss=0.2414, pruned_loss=0.04169, over 18399.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2659, pruned_loss=0.04688, over 3595158.49 frames. ], batch size: 42, lr: 8.66e-03, grad_scale: 8.0
2023-03-09 04:55:38,042 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46430.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:56:07,314 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.145e+02 3.038e+02 3.515e+02 4.163e+02 1.533e+03, threshold=7.029e+02, percent-clipped=2.0
2023-03-09 04:56:11,742 INFO [train.py:898] (3/4) Epoch 13, batch 2850, loss[loss=0.2006, simple_loss=0.2825, pruned_loss=0.05933, over 18278.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.267, pruned_loss=0.04713, over 3602325.34 frames. ], batch size: 57, lr: 8.65e-03, grad_scale: 8.0
2023-03-09 04:56:34,784 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=46478.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:56:43,468 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.47 vs. limit=5.0
2023-03-09 04:57:08,962 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6613, 2.2049, 2.6394, 2.6460, 3.2998, 5.0989, 4.6611, 3.9655], device='cuda:3'), covar=tensor([0.1380, 0.2048, 0.2584, 0.1573, 0.2032, 0.0177, 0.0374, 0.0537], device='cuda:3'), in_proj_covar=tensor([0.0260, 0.0314, 0.0336, 0.0254, 0.0367, 0.0200, 0.0272, 0.0220], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 04:57:10,805 INFO [train.py:898] (3/4) Epoch 13, batch 2900, loss[loss=0.1568, simple_loss=0.2464, pruned_loss=0.0336, over 18551.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2663, pruned_loss=0.04718, over 3589489.95 frames. ], batch size: 49, lr: 8.65e-03, grad_scale: 8.0
2023-03-09 04:57:18,158 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46515.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:58:05,311 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.838e+02 2.971e+02 3.539e+02 4.206e+02 8.689e+02, threshold=7.079e+02, percent-clipped=3.0
2023-03-09 04:58:09,927 INFO [train.py:898] (3/4) Epoch 13, batch 2950, loss[loss=0.1986, simple_loss=0.2781, pruned_loss=0.05957, over 12347.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2666, pruned_loss=0.04717, over 3590744.57 frames. ], batch size: 129, lr: 8.65e-03, grad_scale: 8.0
2023-03-09 04:58:14,676 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=46563.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:58:52,069 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46595.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:58:56,109 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8317, 3.4983, 2.0611, 4.4165, 3.2953, 4.0728, 2.0734, 3.8985], device='cuda:3'), covar=tensor([0.0464, 0.0673, 0.1510, 0.0410, 0.0649, 0.0341, 0.1374, 0.0390], device='cuda:3'), in_proj_covar=tensor([0.0200, 0.0216, 0.0183, 0.0252, 0.0181, 0.0249, 0.0192, 0.0189], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:59:01,151 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46602.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:59:09,179 INFO [train.py:898] (3/4) Epoch 13, batch 3000, loss[loss=0.2132, simple_loss=0.304, pruned_loss=0.06118, over 18459.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2668, pruned_loss=0.04728, over 3590844.25 frames. ], batch size: 59, lr: 8.64e-03, grad_scale: 8.0
2023-03-09 04:59:09,179 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 04:59:21,025 INFO [train.py:932] (3/4) Epoch 13, validation: loss=0.1542, simple_loss=0.256, pruned_loss=0.02615, over 944034.00 frames.
2023-03-09 04:59:21,026 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 04:59:43,623 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5527, 2.8660, 2.4867, 2.7948, 3.5805, 3.5517, 3.0083, 2.8703], device='cuda:3'), covar=tensor([0.0159, 0.0209, 0.0510, 0.0330, 0.0130, 0.0118, 0.0377, 0.0347], device='cuda:3'), in_proj_covar=tensor([0.0122, 0.0113, 0.0153, 0.0144, 0.0108, 0.0097, 0.0138, 0.0136], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 04:59:47,881 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46632.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 04:59:57,832 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-09 05:00:15,393 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.844e+02 2.916e+02 3.524e+02 4.268e+02 9.781e+02, threshold=7.048e+02, percent-clipped=5.0
2023-03-09 05:00:17,034 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46656.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 05:00:19,948 INFO [train.py:898] (3/4) Epoch 13, batch 3050, loss[loss=0.2058, simple_loss=0.2986, pruned_loss=0.05644, over 18353.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2668, pruned_loss=0.04721, over 3578937.90 frames. ], batch size: 56, lr: 8.64e-03, grad_scale: 8.0
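The validation block above (train.py:923/932/933) interleaves with training: the loop pauses, evaluates the model on a fixed dev set (the same 944034.00 frames each time), and logs the result together with the peak CUDA memory. A generic sketch of that pattern; the model(batch) interface returning (loss, num_frames) is a placeholder, not train.py's real API:

```python
import torch

@torch.no_grad()
def compute_validation_loss(model, valid_loader, device):
    """Periodic validation pass; model(batch) -> (loss, num_frames) is a
    placeholder interface, not the real train.py API."""
    model.eval()
    loss_sum, frames = 0.0, 0.0
    for batch in valid_loader:
        loss, num_frames = model(batch)
        loss_sum += float(loss) * num_frames
        frames += num_frames
    model.train()
    mem_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"validation: loss={loss_sum / frames:.4f}, over {frames:.2f} frames.")
    print(f"Maximum memory allocated so far is {mem_mb}MB")
    return loss_sum / frames
```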
2023-03-09 05:00:24,794 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46663.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:01:11,974 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5313, 2.8504, 2.5180, 2.7833, 3.5926, 3.5872, 3.0590, 2.8418], device='cuda:3'), covar=tensor([0.0169, 0.0274, 0.0526, 0.0355, 0.0164, 0.0118, 0.0311, 0.0307], device='cuda:3'), in_proj_covar=tensor([0.0121, 0.0113, 0.0152, 0.0143, 0.0107, 0.0096, 0.0137, 0.0135], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:01:18,395 INFO [train.py:898] (3/4) Epoch 13, batch 3100, loss[loss=0.1815, simple_loss=0.2719, pruned_loss=0.04561, over 18222.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.268, pruned_loss=0.04771, over 3573006.27 frames. ], batch size: 60, lr: 8.63e-03, grad_scale: 8.0
2023-03-09 05:01:38,840 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8233, 3.4780, 2.2641, 4.5011, 3.1296, 4.4808, 2.7078, 4.0579], device='cuda:3'), covar=tensor([0.0463, 0.0841, 0.1426, 0.0436, 0.0822, 0.0250, 0.1008, 0.0355], device='cuda:3'), in_proj_covar=tensor([0.0200, 0.0217, 0.0183, 0.0253, 0.0182, 0.0251, 0.0193, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:02:01,573 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9208, 5.4260, 5.4824, 5.4290, 5.0429, 5.3428, 4.7647, 5.2949], device='cuda:3'), covar=tensor([0.0200, 0.0249, 0.0164, 0.0290, 0.0323, 0.0180, 0.1000, 0.0269], device='cuda:3'), in_proj_covar=tensor([0.0189, 0.0236, 0.0221, 0.0270, 0.0238, 0.0233, 0.0291, 0.0225], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3')
2023-03-09 05:02:12,197 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.124e+02 3.204e+02 3.767e+02 4.418e+02 1.241e+03, threshold=7.535e+02, percent-clipped=5.0
2023-03-09 05:02:16,820 INFO [train.py:898] (3/4) Epoch 13, batch 3150, loss[loss=0.1952, simple_loss=0.2848, pruned_loss=0.05276, over 18349.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2677, pruned_loss=0.04758, over 3578505.26 frames. ], batch size: 55, lr: 8.63e-03, grad_scale: 8.0
2023-03-09 05:03:07,937 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6953, 4.4072, 4.6198, 3.2640, 3.8431, 3.4600, 2.5863, 2.4351], device='cuda:3'), covar=tensor([0.0261, 0.0233, 0.0071, 0.0325, 0.0327, 0.0261, 0.0762, 0.0886], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0050, 0.0052, 0.0062, 0.0084, 0.0060, 0.0072, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 05:03:16,828 INFO [train.py:898] (3/4) Epoch 13, batch 3200, loss[loss=0.1949, simple_loss=0.286, pruned_loss=0.05196, over 17159.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2678, pruned_loss=0.04761, over 3569818.36 frames. ], batch size: 78, lr: 8.62e-03, grad_scale: 8.0
2023-03-09 05:03:18,400 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46810.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:04:10,410 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.338e+02 3.187e+02 3.650e+02 4.694e+02 1.027e+03, threshold=7.300e+02, percent-clipped=4.0
2023-03-09 05:04:15,566 INFO [train.py:898] (3/4) Epoch 13, batch 3250, loss[loss=0.234, simple_loss=0.296, pruned_loss=0.08598, over 12659.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2676, pruned_loss=0.04784, over 3581528.28 frames. ], batch size: 131, lr: 8.62e-03, grad_scale: 8.0
2023-03-09 05:04:29,732 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6996, 3.5946, 3.4057, 3.0568, 3.4252, 2.7383, 2.7710, 3.6980], device='cuda:3'), covar=tensor([0.0040, 0.0065, 0.0075, 0.0120, 0.0067, 0.0163, 0.0165, 0.0040], device='cuda:3'), in_proj_covar=tensor([0.0109, 0.0132, 0.0113, 0.0165, 0.0115, 0.0161, 0.0163, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 05:04:29,737 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46871.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:05:12,535 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0
2023-03-09 05:05:14,056 INFO [train.py:898] (3/4) Epoch 13, batch 3300, loss[loss=0.175, simple_loss=0.2596, pruned_loss=0.0452, over 18537.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.268, pruned_loss=0.04793, over 3586280.01 frames. ], batch size: 49, lr: 8.61e-03, grad_scale: 8.0
2023-03-09 05:05:41,182 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46932.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:05:48,138 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5379, 3.4715, 3.4324, 2.9577, 3.3192, 2.6840, 2.6680, 3.5667], device='cuda:3'), covar=tensor([0.0054, 0.0076, 0.0060, 0.0129, 0.0073, 0.0167, 0.0169, 0.0044], device='cuda:3'), in_proj_covar=tensor([0.0109, 0.0132, 0.0113, 0.0164, 0.0115, 0.0161, 0.0163, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 05:05:52,985 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2378, 5.1298, 5.4117, 5.4172, 5.1884, 5.9978, 5.6035, 5.4137], device='cuda:3'), covar=tensor([0.0923, 0.0609, 0.0633, 0.0724, 0.1400, 0.0705, 0.0592, 0.1381], device='cuda:3'), in_proj_covar=tensor([0.0319, 0.0250, 0.0267, 0.0266, 0.0303, 0.0375, 0.0245, 0.0367], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0002, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3')
2023-03-09 05:05:59,323 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0
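The scaling.py:679 records compare a per-module whitening metric against a limit (2.0 for the 96- and 192-channel modules, 5.0 for the 384-channel ones), where the metric is 1.0 when the channel covariance of the activations is proportional to the identity and grows as channels become correlated or unbalanced. A plausible reconstruction of such a metric; the exact formula in scaling.py may differ:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """x: (num_frames, num_channels). Returns 1.0 when the per-group channel
    covariance is proportional to the identity, larger otherwise. This is a
    plausible reconstruction, not necessarily scaling.py's exact formula."""
    n, c = x.shape
    g = c // num_groups
    metrics = []
    for i in range(num_groups):
        xg = x[:, i * g:(i + 1) * g]
        cov = xg.t() @ xg / n                 # (g, g) covariance estimate
        mean_diag = cov.diagonal().mean()     # average channel power
        mean_sq = (cov ** 2).sum() / g        # total energy of the covariance
        metrics.append(mean_sq / (mean_diag ** 2 + 1e-20))
    return float(torch.stack(metrics).mean())

x = torch.randn(10000, 96)                    # independent channels
print(whitening_metric(x, num_groups=8))      # ~1.0, well under limit=2.0
```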
limit=2.0 2023-03-09 05:06:03,351 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46951.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 05:06:07,665 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.054e+02 2.913e+02 3.556e+02 4.553e+02 1.517e+03, threshold=7.113e+02, percent-clipped=7.0 2023-03-09 05:06:11,435 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46958.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 05:06:12,394 INFO [train.py:898] (3/4) Epoch 13, batch 3350, loss[loss=0.1672, simple_loss=0.2559, pruned_loss=0.03927, over 18407.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2672, pruned_loss=0.04717, over 3598493.18 frames. ], batch size: 52, lr: 8.61e-03, grad_scale: 8.0 2023-03-09 05:06:17,144 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-09 05:06:38,287 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=46980.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 05:07:12,263 INFO [train.py:898] (3/4) Epoch 13, batch 3400, loss[loss=0.1798, simple_loss=0.2703, pruned_loss=0.04461, over 18290.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2672, pruned_loss=0.04723, over 3584726.92 frames. ], batch size: 60, lr: 8.60e-03, grad_scale: 8.0 2023-03-09 05:07:16,528 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-09 05:08:07,262 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.855e+02 2.739e+02 3.192e+02 3.909e+02 7.528e+02, threshold=6.383e+02, percent-clipped=0.0 2023-03-09 05:08:11,935 INFO [train.py:898] (3/4) Epoch 13, batch 3450, loss[loss=0.1619, simple_loss=0.2377, pruned_loss=0.0431, over 17757.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2668, pruned_loss=0.04729, over 3572109.93 frames. ], batch size: 39, lr: 8.60e-03, grad_scale: 8.0 2023-03-09 05:08:12,363 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6380, 2.8772, 2.8180, 2.9014, 3.7251, 3.6542, 3.0956, 2.9657], device='cuda:3'), covar=tensor([0.0177, 0.0277, 0.0453, 0.0375, 0.0184, 0.0164, 0.0351, 0.0376], device='cuda:3'), in_proj_covar=tensor([0.0123, 0.0114, 0.0152, 0.0144, 0.0107, 0.0097, 0.0139, 0.0139], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 05:08:17,043 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8444, 4.5157, 4.6763, 3.4325, 3.8572, 3.4147, 2.9602, 2.5262], device='cuda:3'), covar=tensor([0.0227, 0.0144, 0.0069, 0.0308, 0.0295, 0.0257, 0.0680, 0.0896], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0050, 0.0052, 0.0062, 0.0084, 0.0060, 0.0073, 0.0080], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 05:08:29,384 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4493, 2.8386, 2.6830, 2.8877, 3.6221, 3.5942, 3.0360, 2.8650], device='cuda:3'), covar=tensor([0.0205, 0.0294, 0.0499, 0.0339, 0.0159, 0.0132, 0.0351, 0.0352], device='cuda:3'), in_proj_covar=tensor([0.0123, 0.0114, 0.0153, 0.0144, 0.0108, 0.0097, 0.0139, 0.0138], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 05:09:11,341 INFO [train.py:898] (3/4) Epoch 13, batch 3500, loss[loss=0.1821, simple_loss=0.2579, pruned_loss=0.05317, over 17681.00 frames. 
2023-03-09 05:09:51,547 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8984, 4.5426, 4.6596, 3.4428, 3.8554, 3.5303, 2.8678, 2.4696], device='cuda:3'), covar=tensor([0.0227, 0.0150, 0.0078, 0.0270, 0.0297, 0.0230, 0.0682, 0.0835], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0049, 0.0052, 0.0062, 0.0084, 0.0060, 0.0073, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 05:09:59,396 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.56 vs. limit=5.0
2023-03-09 05:10:03,843 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.269e+02 2.980e+02 3.583e+02 4.065e+02 6.381e+02, threshold=7.167e+02, percent-clipped=1.0
2023-03-09 05:10:08,140 INFO [train.py:898] (3/4) Epoch 13, batch 3550, loss[loss=0.2109, simple_loss=0.2914, pruned_loss=0.06517, over 15947.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2673, pruned_loss=0.04717, over 3584126.56 frames. ], batch size: 94, lr: 8.59e-03, grad_scale: 8.0
2023-03-09 05:10:08,442 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8197, 4.7996, 4.8322, 4.6333, 4.5710, 4.6704, 4.9880, 4.9869], device='cuda:3'), covar=tensor([0.0066, 0.0075, 0.0067, 0.0088, 0.0066, 0.0116, 0.0072, 0.0095], device='cuda:3'), in_proj_covar=tensor([0.0085, 0.0060, 0.0063, 0.0080, 0.0067, 0.0092, 0.0077, 0.0078], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:10:15,862 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=47166.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:11:02,842 INFO [train.py:898] (3/4) Epoch 13, batch 3600, loss[loss=0.1654, simple_loss=0.2532, pruned_loss=0.03878, over 18514.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2674, pruned_loss=0.04703, over 3579024.01 frames. ], batch size: 47, lr: 8.59e-03, grad_scale: 8.0
2023-03-09 05:11:27,169 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0
2023-03-09 05:11:36,448 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47241.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:12:09,410 INFO [train.py:898] (3/4) Epoch 14, batch 0, loss[loss=0.1631, simple_loss=0.2389, pruned_loss=0.04363, over 18107.00 frames. ], tot_loss[loss=0.1631, simple_loss=0.2389, pruned_loss=0.04363, over 18107.00 frames. ], batch size: 40, lr: 8.27e-03, grad_scale: 8.0
2023-03-09 05:12:09,411 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 05:12:21,330 INFO [train.py:932] (3/4) Epoch 14, validation: loss=0.155, simple_loss=0.2569, pruned_loss=0.0266, over 944034.00 frames.
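At the epoch boundary the learning rate steps from 8.59e-03 down to 8.27e-03 and then keeps decaying slowly within the epoch, which matches an Eden-style schedule with both a batch term and an epoch term. A sketch that reproduces the logged value at this point; the constants base_lr=0.05, lr_batches=5000 and lr_epochs=3.5 are assumptions about this run's configuration:

    def eden_lr(base_lr: float, batch: int, epoch: int,
                lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
        # Eden schedule: decay in both the batch and the epoch dimension.
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    # 13 epochs finished, roughly 47245 batches seen:
    # eden_lr(0.05, 47245, 13) ~= 8.27e-03, matching the "lr: 8.27e-03"
    # printed at the start of epoch 14.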
2023-03-09 05:12:21,330 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 05:12:31,285 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47251.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:12:35,445 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.980e+02 3.395e+02 4.114e+02 5.070e+02 1.381e+03, threshold=8.228e+02, percent-clipped=11.0
2023-03-09 05:12:39,183 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47258.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:13:19,477 INFO [train.py:898] (3/4) Epoch 14, batch 50, loss[loss=0.168, simple_loss=0.2565, pruned_loss=0.03976, over 18365.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2645, pruned_loss=0.04745, over 814652.62 frames. ], batch size: 50, lr: 8.27e-03, grad_scale: 8.0
2023-03-09 05:13:26,880 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=47299.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:13:30,344 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47302.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:13:35,435 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=47306.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:14:18,532 INFO [train.py:898] (3/4) Epoch 14, batch 100, loss[loss=0.162, simple_loss=0.2343, pruned_loss=0.04486, over 17304.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2676, pruned_loss=0.04754, over 1431933.51 frames. ], batch size: 38, lr: 8.26e-03, grad_scale: 8.0
2023-03-09 05:14:32,861 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.774e+02 2.933e+02 3.364e+02 4.152e+02 6.819e+02, threshold=6.727e+02, percent-clipped=0.0
2023-03-09 05:15:02,503 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47379.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:15:18,119 INFO [train.py:898] (3/4) Epoch 14, batch 150, loss[loss=0.1679, simple_loss=0.2599, pruned_loss=0.038, over 18230.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2649, pruned_loss=0.04625, over 1917827.20 frames. ], batch size: 60, lr: 8.26e-03, grad_scale: 8.0
2023-03-09 05:16:15,614 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47440.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:16:15,885 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-09 05:16:18,707 INFO [train.py:898] (3/4) Epoch 14, batch 200, loss[loss=0.1593, simple_loss=0.2354, pruned_loss=0.04154, over 18426.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2643, pruned_loss=0.04623, over 2272318.09 frames. ], batch size: 42, lr: 8.25e-03, grad_scale: 8.0
2023-03-09 05:16:32,554 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.856e+02 2.798e+02 3.533e+02 4.067e+02 1.064e+03, threshold=7.066e+02, percent-clipped=5.0
2023-03-09 05:16:46,879 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47466.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:17:18,346 INFO [train.py:898] (3/4) Epoch 14, batch 250, loss[loss=0.1783, simple_loss=0.2588, pruned_loss=0.04892, over 18589.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2646, pruned_loss=0.04638, over 2561375.49 frames. ], batch size: 45, lr: 8.25e-03, grad_scale: 8.0
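The [zipformer.py:625] lines track stochastic layer skipping inside each encoder stack: each print shows a stack's warmup window (warmup_begin, warmup_end) in batches, the current batch_count, and whether any of its layers are being skipped for this batch (num_to_drop, layers_to_drop). A simplified sketch of the idea; the real schedule and probabilities live in zipformer.py, and the numbers below are made up for illustration:

    import random

    def choose_layers_to_drop(num_layers: int, batch_count: float,
                              warmup_begin: float, warmup_end: float) -> set[int]:
        # Occasionally skip a few layers so the stack learns not to depend
        # on any single one; skipping is most aggressive during warmup but,
        # as the log shows, still happens occasionally afterwards.
        in_warmup = warmup_begin <= batch_count < warmup_end
        p_drop = 0.5 if in_warmup else 0.05   # illustrative probabilities
        num_to_drop = sum(random.random() < p_drop for _ in range(2))
        return set(random.sample(range(num_layers), num_to_drop))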
2023-03-09 05:17:43,442 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=47514.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:18:16,892 INFO [train.py:898] (3/4) Epoch 14, batch 300, loss[loss=0.1795, simple_loss=0.2686, pruned_loss=0.04524, over 18348.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2664, pruned_loss=0.04662, over 2795412.81 frames. ], batch size: 55, lr: 8.24e-03, grad_scale: 8.0
2023-03-09 05:18:30,796 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.041e+02 3.040e+02 3.610e+02 4.364e+02 8.752e+02, threshold=7.221e+02, percent-clipped=2.0
2023-03-09 05:19:16,485 INFO [train.py:898] (3/4) Epoch 14, batch 350, loss[loss=0.1891, simple_loss=0.2801, pruned_loss=0.04905, over 18504.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2662, pruned_loss=0.04669, over 2972514.70 frames. ], batch size: 51, lr: 8.24e-03, grad_scale: 8.0
2023-03-09 05:19:21,105 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=47597.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:20:15,232 INFO [train.py:898] (3/4) Epoch 14, batch 400, loss[loss=0.1827, simple_loss=0.2743, pruned_loss=0.04557, over 18630.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2657, pruned_loss=0.0462, over 3118017.45 frames. ], batch size: 52, lr: 8.24e-03, grad_scale: 8.0
2023-03-09 05:20:28,437 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.702e+02 2.895e+02 3.399e+02 4.132e+02 9.040e+02, threshold=6.798e+02, percent-clipped=2.0
2023-03-09 05:20:44,999 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0227, 5.1332, 5.0270, 4.8659, 4.8142, 5.0360, 5.2389, 5.2157], device='cuda:3'), covar=tensor([0.0065, 0.0055, 0.0059, 0.0089, 0.0062, 0.0105, 0.0072, 0.0095], device='cuda:3'), in_proj_covar=tensor([0.0084, 0.0060, 0.0063, 0.0080, 0.0066, 0.0091, 0.0077, 0.0076], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:21:14,182 INFO [train.py:898] (3/4) Epoch 14, batch 450, loss[loss=0.1982, simple_loss=0.2853, pruned_loss=0.05556, over 18142.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.2661, pruned_loss=0.04641, over 3220698.16 frames. ], batch size: 62, lr: 8.23e-03, grad_scale: 8.0
2023-03-09 05:21:31,812 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6687, 2.2251, 2.6247, 2.7530, 3.4868, 5.2210, 4.8989, 3.8475], device='cuda:3'), covar=tensor([0.1476, 0.2133, 0.2635, 0.1525, 0.1794, 0.0110, 0.0302, 0.0606], device='cuda:3'), in_proj_covar=tensor([0.0262, 0.0318, 0.0338, 0.0257, 0.0370, 0.0204, 0.0273, 0.0223], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 05:21:42,326 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-09 05:22:04,127 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=47735.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:22:05,488 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2955, 4.3790, 2.5260, 4.4018, 5.4157, 2.5943, 3.9006, 4.1991], device='cuda:3'), covar=tensor([0.0134, 0.1063, 0.1560, 0.0505, 0.0048, 0.1214, 0.0651, 0.0646], device='cuda:3'), in_proj_covar=tensor([0.0134, 0.0249, 0.0198, 0.0193, 0.0098, 0.0177, 0.0210, 0.0213], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:22:07,685 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47738.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:22:12,864 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3839, 3.2968, 2.1745, 4.1835, 2.8617, 4.0977, 2.2145, 3.6973], device='cuda:3'), covar=tensor([0.0693, 0.0847, 0.1432, 0.0532, 0.0917, 0.0453, 0.1286, 0.0424], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0219, 0.0184, 0.0259, 0.0183, 0.0253, 0.0194, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:22:13,606 INFO [train.py:898] (3/4) Epoch 14, batch 500, loss[loss=0.1754, simple_loss=0.2621, pruned_loss=0.04434, over 18215.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2653, pruned_loss=0.04573, over 3311024.03 frames. ], batch size: 60, lr: 8.23e-03, grad_scale: 8.0
2023-03-09 05:22:27,493 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.904e+02 2.746e+02 3.457e+02 4.087e+02 8.380e+02, threshold=6.915e+02, percent-clipped=1.0
2023-03-09 05:22:29,172 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8240, 3.7772, 3.5620, 3.2089, 3.6306, 2.7160, 2.5392, 3.8065], device='cuda:3'), covar=tensor([0.0043, 0.0075, 0.0089, 0.0129, 0.0078, 0.0203, 0.0270, 0.0049], device='cuda:3'), in_proj_covar=tensor([0.0109, 0.0132, 0.0114, 0.0165, 0.0116, 0.0161, 0.0164, 0.0096], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 05:22:52,874 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8935, 3.2417, 4.3805, 4.0231, 3.0675, 4.8107, 3.9778, 3.1300], device='cuda:3'), covar=tensor([0.0423, 0.1225, 0.0300, 0.0322, 0.1347, 0.0158, 0.0498, 0.0972], device='cuda:3'), in_proj_covar=tensor([0.0196, 0.0225, 0.0169, 0.0146, 0.0212, 0.0191, 0.0217, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 05:23:06,911 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47788.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:23:12,625 INFO [train.py:898] (3/4) Epoch 14, batch 550, loss[loss=0.1623, simple_loss=0.2448, pruned_loss=0.03994, over 18499.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2648, pruned_loss=0.04559, over 3383491.26 frames. ], batch size: 44, lr: 8.22e-03, grad_scale: 8.0
2023-03-09 05:23:20,539 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47799.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:23:31,804 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47809.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 05:24:03,695 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5871, 3.4919, 3.4137, 2.9812, 3.3681, 2.6208, 2.6853, 3.5575], device='cuda:3'), covar=tensor([0.0042, 0.0080, 0.0083, 0.0129, 0.0083, 0.0178, 0.0181, 0.0058], device='cuda:3'), in_proj_covar=tensor([0.0110, 0.0133, 0.0115, 0.0167, 0.0117, 0.0162, 0.0165, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 05:24:12,017 INFO [train.py:898] (3/4) Epoch 14, batch 600, loss[loss=0.1621, simple_loss=0.2398, pruned_loss=0.0422, over 17720.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2658, pruned_loss=0.04594, over 3432621.14 frames. ], batch size: 39, lr: 8.22e-03, grad_scale: 8.0
2023-03-09 05:24:19,732 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47849.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:24:22,464 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
2023-03-09 05:24:25,846 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.360e+02 2.943e+02 3.448e+02 4.485e+02 1.001e+03, threshold=6.896e+02, percent-clipped=3.0
2023-03-09 05:24:43,073 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47870.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 05:25:10,121 INFO [train.py:898] (3/4) Epoch 14, batch 650, loss[loss=0.1842, simple_loss=0.276, pruned_loss=0.04616, over 18409.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2653, pruned_loss=0.04573, over 3477644.93 frames. ], batch size: 52, lr: 8.21e-03, grad_scale: 8.0
2023-03-09 05:25:15,401 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47897.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:26:09,518 INFO [train.py:898] (3/4) Epoch 14, batch 700, loss[loss=0.1594, simple_loss=0.2434, pruned_loss=0.03765, over 18272.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2647, pruned_loss=0.04561, over 3510440.36 frames. ], batch size: 45, lr: 8.21e-03, grad_scale: 8.0
2023-03-09 05:26:12,027 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=47945.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:26:23,809 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.052e+02 2.964e+02 3.596e+02 4.648e+02 7.874e+02, threshold=7.192e+02, percent-clipped=3.0
2023-03-09 05:27:08,302 INFO [train.py:898] (3/4) Epoch 14, batch 750, loss[loss=0.1528, simple_loss=0.2365, pruned_loss=0.03456, over 17707.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2652, pruned_loss=0.04578, over 3529070.07 frames. ], batch size: 39, lr: 8.21e-03, grad_scale: 16.0
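grad_scale is the dynamic loss-scaling factor of the fp16 run: it doubles here from 8.0 to 16.0 and, a few dozen batches later in the log, is back at 8.0, the usual behaviour of a scaler that grows the scale while steps succeed and halves it when gradients overflow. Roughly what that loop looks like with PyTorch's own GradScaler (a generic sketch, not the exact train.py logic):

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=8.0, growth_interval=2000)
    # Per batch:
    #   with torch.cuda.amp.autocast():
    #       loss = compute_loss(model, batch)   # hypothetical helper
    #   scaler.scale(loss).backward()
    #   scaler.step(optimizer)   # skipped if inf/nan gradients are found
    #   scaler.update()          # grows the scale periodically, halves it on overflow
    #   current_scale = scaler.get_scale()      # the value logged as grad_scale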
2023-03-09 05:27:13,935 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8058, 3.6590, 3.5407, 3.0728, 3.3743, 2.6707, 2.6809, 3.6457], device='cuda:3'), covar=tensor([0.0038, 0.0072, 0.0064, 0.0116, 0.0084, 0.0193, 0.0184, 0.0054], device='cuda:3'), in_proj_covar=tensor([0.0111, 0.0135, 0.0116, 0.0167, 0.0118, 0.0164, 0.0165, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 05:27:21,974 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4387, 3.8250, 3.9666, 2.9786, 3.3574, 3.0712, 2.5247, 2.2875], device='cuda:3'), covar=tensor([0.0217, 0.0224, 0.0087, 0.0304, 0.0325, 0.0231, 0.0684, 0.0832], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0052, 0.0053, 0.0063, 0.0085, 0.0061, 0.0074, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 05:27:40,783 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.85 vs. limit=2.0
2023-03-09 05:28:03,252 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48035.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:28:12,890 INFO [train.py:898] (3/4) Epoch 14, batch 800, loss[loss=0.1975, simple_loss=0.289, pruned_loss=0.05301, over 16115.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2645, pruned_loss=0.04576, over 3533580.97 frames. ], batch size: 94, lr: 8.20e-03, grad_scale: 8.0
2023-03-09 05:28:28,351 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.329e+02 3.109e+02 3.556e+02 4.248e+02 1.366e+03, threshold=7.111e+02, percent-clipped=4.0
2023-03-09 05:28:48,051 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48072.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:29:00,401 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=48083.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:29:04,145 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4498, 2.9327, 2.1920, 2.8295, 3.6040, 3.5043, 3.1272, 2.9939], device='cuda:3'), covar=tensor([0.0177, 0.0213, 0.0648, 0.0280, 0.0122, 0.0125, 0.0293, 0.0244], device='cuda:3'), in_proj_covar=tensor([0.0122, 0.0114, 0.0152, 0.0142, 0.0107, 0.0096, 0.0136, 0.0137], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0002, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:29:11,643 INFO [train.py:898] (3/4) Epoch 14, batch 850, loss[loss=0.1891, simple_loss=0.2726, pruned_loss=0.0528, over 17844.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2654, pruned_loss=0.04615, over 3542430.12 frames. ], batch size: 70, lr: 8.20e-03, grad_scale: 8.0
2023-03-09 05:29:13,139 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48094.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:29:16,302 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4904, 4.6252, 2.8023, 4.4997, 5.5553, 2.8453, 4.2099, 4.3667], device='cuda:3'), covar=tensor([0.0078, 0.0841, 0.1375, 0.0460, 0.0041, 0.1114, 0.0532, 0.0545], device='cuda:3'), in_proj_covar=tensor([0.0131, 0.0242, 0.0190, 0.0187, 0.0097, 0.0174, 0.0203, 0.0206], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:29:59,993 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48133.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:30:10,877 INFO [train.py:898] (3/4) Epoch 14, batch 900, loss[loss=0.1929, simple_loss=0.2811, pruned_loss=0.05229, over 17990.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2654, pruned_loss=0.04637, over 3536492.72 frames. ], batch size: 65, lr: 8.19e-03, grad_scale: 8.0
2023-03-09 05:30:12,250 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48144.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:30:25,238 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5108, 1.9849, 2.5722, 2.4979, 3.1534, 4.9885, 4.5471, 3.7213], device='cuda:3'), covar=tensor([0.1533, 0.2445, 0.2627, 0.1696, 0.2151, 0.0157, 0.0387, 0.0651], device='cuda:3'), in_proj_covar=tensor([0.0263, 0.0321, 0.0342, 0.0259, 0.0373, 0.0205, 0.0275, 0.0224], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 05:30:26,934 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.713e+02 2.967e+02 3.519e+02 4.098e+02 9.060e+02, threshold=7.037e+02, percent-clipped=1.0
2023-03-09 05:30:38,264 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48165.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 05:31:10,224 INFO [train.py:898] (3/4) Epoch 14, batch 950, loss[loss=0.2054, simple_loss=0.2922, pruned_loss=0.05927, over 18132.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2646, pruned_loss=0.04584, over 3548867.91 frames. ], batch size: 62, lr: 8.19e-03, grad_scale: 8.0
2023-03-09 05:31:34,401 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48212.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 05:32:04,360 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6864, 4.6693, 4.8146, 4.5724, 4.5871, 4.6179, 4.9127, 4.8939], device='cuda:3'), covar=tensor([0.0073, 0.0090, 0.0073, 0.0110, 0.0072, 0.0137, 0.0092, 0.0120], device='cuda:3'), in_proj_covar=tensor([0.0083, 0.0059, 0.0063, 0.0080, 0.0066, 0.0090, 0.0075, 0.0076], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:32:09,653 INFO [train.py:898] (3/4) Epoch 14, batch 1000, loss[loss=0.1565, simple_loss=0.2395, pruned_loss=0.03671, over 18351.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.265, pruned_loss=0.04621, over 3534180.70 frames. ], batch size: 46, lr: 8.19e-03, grad_scale: 8.0
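Each train.py:898 entry prints two blocks: loss[...] for the current batch and tot_loss[...] for a running average whose "over N frames" count grows through the epoch (compare 814652.62 frames at batch 50 with roughly 3.5M here) and then plateaus, which is the signature of a decayed, frame-weighted average. A sketch of such an average; the decay constant is an assumption, though with ~18k frames per batch a decay of 0.995 levels off near 3.6M frames, about where the logged counts sit:

    class RunningLoss:
        """Frame-weighted running average with exponential forgetting (sketch)."""

        def __init__(self, decay: float = 0.995):
            self.decay = decay
            self.loss_sum = 0.0   # decayed sum of (per-frame loss * frames)
            self.frames = 0.0     # decayed frame count (the logged "over N frames")

        def update(self, batch_loss: float, batch_frames: float) -> None:
            self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
            self.frames = self.decay * self.frames + batch_frames

        @property
        def value(self) -> float:
            return self.loss_sum / max(self.frames, 1.0)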
2023-03-09 05:32:26,019 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.854e+02 2.987e+02 3.542e+02 4.398e+02 9.134e+02, threshold=7.083e+02, percent-clipped=3.0
2023-03-09 05:32:46,769 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48273.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 05:33:09,316 INFO [train.py:898] (3/4) Epoch 14, batch 1050, loss[loss=0.1809, simple_loss=0.2745, pruned_loss=0.0436, over 18583.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2652, pruned_loss=0.04617, over 3539835.13 frames. ], batch size: 54, lr: 8.18e-03, grad_scale: 8.0
2023-03-09 05:33:19,969 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6619, 3.1231, 4.1280, 3.7077, 2.5898, 4.4570, 3.8507, 2.6253], device='cuda:3'), covar=tensor([0.0431, 0.1155, 0.0225, 0.0348, 0.1481, 0.0186, 0.0503, 0.1150], device='cuda:3'), in_proj_covar=tensor([0.0195, 0.0226, 0.0170, 0.0146, 0.0213, 0.0191, 0.0218, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 05:33:47,278 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48324.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:34:08,862 INFO [train.py:898] (3/4) Epoch 14, batch 1100, loss[loss=0.1471, simple_loss=0.2255, pruned_loss=0.03436, over 18363.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2638, pruned_loss=0.04524, over 3559249.94 frames. ], batch size: 46, lr: 8.18e-03, grad_scale: 8.0
2023-03-09 05:34:23,490 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.341e+02 3.220e+02 3.737e+02 4.236e+02 6.762e+02, threshold=7.474e+02, percent-clipped=0.0
2023-03-09 05:34:42,605 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3919, 3.1671, 2.0722, 4.0914, 2.8422, 4.0390, 2.1278, 3.7059], device='cuda:3'), covar=tensor([0.0601, 0.0791, 0.1314, 0.0440, 0.0802, 0.0230, 0.1177, 0.0333], device='cuda:3'), in_proj_covar=tensor([0.0198, 0.0213, 0.0179, 0.0253, 0.0180, 0.0248, 0.0192, 0.0190], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:34:57,302 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-09 05:34:59,472 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48385.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:35:08,046 INFO [train.py:898] (3/4) Epoch 14, batch 1150, loss[loss=0.1932, simple_loss=0.2849, pruned_loss=0.05076, over 17061.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2649, pruned_loss=0.04575, over 3562971.09 frames. ], batch size: 78, lr: 8.17e-03, grad_scale: 8.0
2023-03-09 05:35:09,513 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48394.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:35:12,844 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48397.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:35:44,619 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48423.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:35:49,932 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48428.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:36:06,309 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=48442.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:36:07,211 INFO [train.py:898] (3/4) Epoch 14, batch 1200, loss[loss=0.173, simple_loss=0.2601, pruned_loss=0.04292, over 18307.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2639, pruned_loss=0.0455, over 3576774.51 frames. ], batch size: 49, lr: 8.17e-03, grad_scale: 8.0
2023-03-09 05:36:08,687 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48444.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:36:22,327 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.965e+02 3.035e+02 3.656e+02 4.283e+02 1.032e+03, threshold=7.311e+02, percent-clipped=2.0
2023-03-09 05:36:25,058 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48458.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:36:30,886 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4712, 3.6285, 5.0812, 4.3902, 3.3052, 3.1068, 4.4481, 5.1735], device='cuda:3'), covar=tensor([0.0831, 0.1572, 0.0121, 0.0326, 0.0890, 0.1016, 0.0366, 0.0282], device='cuda:3'), in_proj_covar=tensor([0.0138, 0.0254, 0.0117, 0.0167, 0.0180, 0.0180, 0.0181, 0.0164], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:36:33,510 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48465.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 05:36:57,022 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48484.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:37:05,995 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=48492.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:37:06,945 INFO [train.py:898] (3/4) Epoch 14, batch 1250, loss[loss=0.1837, simple_loss=0.2729, pruned_loss=0.04729, over 18229.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2636, pruned_loss=0.04524, over 3579434.04 frames. ], batch size: 60, lr: 8.16e-03, grad_scale: 8.0
2023-03-09 05:37:23,685 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5697, 5.0190, 5.0031, 5.0214, 4.5722, 4.9272, 4.3456, 4.9190], device='cuda:3'), covar=tensor([0.0265, 0.0289, 0.0213, 0.0406, 0.0346, 0.0219, 0.1139, 0.0322], device='cuda:3'), in_proj_covar=tensor([0.0193, 0.0241, 0.0228, 0.0279, 0.0244, 0.0239, 0.0297, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 05:37:30,406 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=48513.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 05:37:48,071 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3195, 5.8473, 5.3344, 5.6149, 5.4158, 5.2412, 5.8981, 5.8393], device='cuda:3'), covar=tensor([0.1081, 0.0733, 0.0479, 0.0687, 0.1392, 0.0732, 0.0510, 0.0660], device='cuda:3'), in_proj_covar=tensor([0.0550, 0.0462, 0.0341, 0.0489, 0.0664, 0.0493, 0.0650, 0.0481], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 05:38:06,733 INFO [train.py:898] (3/4) Epoch 14, batch 1300, loss[loss=0.1686, simple_loss=0.2476, pruned_loss=0.04476, over 18419.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2637, pruned_loss=0.04558, over 3565965.59 frames. ], batch size: 42, lr: 8.16e-03, grad_scale: 8.0
2023-03-09 05:38:21,325 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.028e+02 2.821e+02 3.494e+02 4.256e+02 8.799e+02, threshold=6.987e+02, percent-clipped=3.0
2023-03-09 05:38:27,280 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4604, 5.3204, 5.6092, 5.5803, 5.2790, 6.2079, 5.7185, 5.4347], device='cuda:3'), covar=tensor([0.0919, 0.0630, 0.0684, 0.0710, 0.1452, 0.0643, 0.0629, 0.1611], device='cuda:3'), in_proj_covar=tensor([0.0328, 0.0258, 0.0271, 0.0274, 0.0310, 0.0382, 0.0251, 0.0376], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 05:38:35,200 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48568.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 05:39:05,474 INFO [train.py:898] (3/4) Epoch 14, batch 1350, loss[loss=0.1728, simple_loss=0.2613, pruned_loss=0.04212, over 18411.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.264, pruned_loss=0.04538, over 3573347.83 frames. ], batch size: 52, lr: 8.16e-03, grad_scale: 8.0
2023-03-09 05:39:58,460 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5484, 5.4818, 5.1204, 5.4146, 5.4511, 4.8268, 5.3632, 5.1012], device='cuda:3'), covar=tensor([0.0349, 0.0400, 0.1191, 0.0824, 0.0506, 0.0407, 0.0422, 0.0944], device='cuda:3'), in_proj_covar=tensor([0.0437, 0.0505, 0.0665, 0.0389, 0.0385, 0.0458, 0.0486, 0.0629], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 05:40:05,008 INFO [train.py:898] (3/4) Epoch 14, batch 1400, loss[loss=0.2008, simple_loss=0.2926, pruned_loss=0.05447, over 18369.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2645, pruned_loss=0.0455, over 3570654.58 frames. ], batch size: 52, lr: 8.15e-03, grad_scale: 8.0
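The attn_weights_entropy dumps from zipformer.py:1455 are a per-head diagnostic: for each of the eight attention heads they report the average entropy of its attention distribution (low entropy means a sharply focused head, high entropy a diffuse one), together with covar/in_proj_covar/out_proj_covar statistics whose exact definition is internal to zipformer.py. The entropy part can be computed like this (a generic sketch, not the module's actual code):

    import torch

    def attn_entropy_per_head(attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: (num_heads, batch, query_len, key_len); rows sum to 1.
        p = attn_weights.clamp(min=1e-20)
        entropy = -(p * p.log()).sum(dim=-1)   # entropy of each attention row
        return entropy.mean(dim=(1, 2))        # average per head -> (num_heads,)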
2023-03-09 05:40:19,851 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.683e+02 2.979e+02 3.618e+02 4.043e+02 7.873e+02, threshold=7.235e+02, percent-clipped=2.0
2023-03-09 05:40:49,400 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48680.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:41:03,768 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3311, 5.9553, 5.3683, 5.6538, 5.4620, 5.4268, 5.9634, 5.9407], device='cuda:3'), covar=tensor([0.1310, 0.0816, 0.0535, 0.0759, 0.1464, 0.0705, 0.0594, 0.0695], device='cuda:3'), in_proj_covar=tensor([0.0556, 0.0466, 0.0344, 0.0492, 0.0669, 0.0494, 0.0653, 0.0485], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 05:41:04,623 INFO [train.py:898] (3/4) Epoch 14, batch 1450, loss[loss=0.1832, simple_loss=0.2732, pruned_loss=0.04656, over 18374.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2655, pruned_loss=0.04568, over 3567454.28 frames. ], batch size: 52, lr: 8.15e-03, grad_scale: 8.0
2023-03-09 05:41:21,383 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0
2023-03-09 05:41:45,661 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48728.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:41:56,115 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9718, 4.5502, 4.5930, 3.4852, 3.6867, 3.5228, 2.7154, 2.4808], device='cuda:3'), covar=tensor([0.0183, 0.0165, 0.0074, 0.0271, 0.0329, 0.0209, 0.0664, 0.0831], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0051, 0.0053, 0.0062, 0.0083, 0.0059, 0.0073, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 05:42:03,617 INFO [train.py:898] (3/4) Epoch 14, batch 1500, loss[loss=0.1972, simple_loss=0.2766, pruned_loss=0.05889, over 18431.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2655, pruned_loss=0.04561, over 3571659.37 frames. ], batch size: 48, lr: 8.14e-03, grad_scale: 8.0
2023-03-09 05:42:15,668 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48753.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:42:18,853 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.036e+02 2.981e+02 3.460e+02 4.226e+02 1.044e+03, threshold=6.921e+02, percent-clipped=2.0
2023-03-09 05:42:38,995 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9249, 3.8545, 3.6062, 3.2540, 3.5406, 2.9680, 3.0662, 3.8747], device='cuda:3'), covar=tensor([0.0040, 0.0073, 0.0075, 0.0101, 0.0069, 0.0145, 0.0133, 0.0046], device='cuda:3'), in_proj_covar=tensor([0.0112, 0.0136, 0.0116, 0.0167, 0.0119, 0.0165, 0.0166, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 05:42:42,173 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=48776.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:42:45,566 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48779.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:43:02,595 INFO [train.py:898] (3/4) Epoch 14, batch 1550, loss[loss=0.1593, simple_loss=0.2496, pruned_loss=0.03453, over 18397.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2661, pruned_loss=0.04583, over 3581652.30 frames. ], batch size: 48, lr: 8.14e-03, grad_scale: 8.0
2023-03-09 05:43:35,903 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48821.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:43:55,794 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48838.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:44:02,359 INFO [train.py:898] (3/4) Epoch 14, batch 1600, loss[loss=0.16, simple_loss=0.2399, pruned_loss=0.04006, over 18501.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2655, pruned_loss=0.04579, over 3575315.04 frames. ], batch size: 44, lr: 8.14e-03, grad_scale: 8.0
2023-03-09 05:44:17,654 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.056e+02 2.816e+02 3.506e+02 4.400e+02 1.387e+03, threshold=7.012e+02, percent-clipped=5.0
2023-03-09 05:44:31,342 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48868.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 05:44:43,881 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4876, 2.7764, 3.9627, 3.6211, 2.5524, 4.2846, 3.7751, 2.7664], device='cuda:3'), covar=tensor([0.0478, 0.1300, 0.0263, 0.0342, 0.1471, 0.0204, 0.0515, 0.0958], device='cuda:3'), in_proj_covar=tensor([0.0199, 0.0228, 0.0172, 0.0149, 0.0216, 0.0190, 0.0220, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 05:44:47,358 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48882.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:44:59,846 INFO [train.py:898] (3/4) Epoch 14, batch 1650, loss[loss=0.1987, simple_loss=0.2895, pruned_loss=0.05395, over 16298.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2656, pruned_loss=0.04599, over 3574111.89 frames. ], batch size: 94, lr: 8.13e-03, grad_scale: 8.0
2023-03-09 05:45:07,570 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48899.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:45:27,192 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=48916.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 05:45:58,702 INFO [train.py:898] (3/4) Epoch 14, batch 1700, loss[loss=0.171, simple_loss=0.2553, pruned_loss=0.04339, over 18269.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2662, pruned_loss=0.04648, over 3551926.54 frames. ], batch size: 47, lr: 8.13e-03, grad_scale: 8.0
2023-03-09 05:46:14,750 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0
2023-03-09 05:46:15,065 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.904e+02 3.125e+02 3.663e+02 4.529e+02 8.545e+02, threshold=7.326e+02, percent-clipped=5.0
2023-03-09 05:46:42,469 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48980.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:46:57,433 INFO [train.py:898] (3/4) Epoch 14, batch 1750, loss[loss=0.1962, simple_loss=0.2847, pruned_loss=0.05386, over 18489.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2645, pruned_loss=0.04558, over 3571406.59 frames. ], batch size: 53, lr: 8.12e-03, grad_scale: 8.0
2023-03-09 05:47:18,285 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0
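The [scaling.py:679] Whitening lines report how close a module's activations are to being "white" (decorrelated, equal-variance) within each channel group: the metric is 1.0 for perfectly white features and grows as the covariance spectrum becomes lopsided, and the module reacts when it approaches the printed limit. One illustrative way to compute such a metric; this formulation is an assumption, not necessarily the exact one in scaling.py:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
        # x: (num_frames, num_channels). Split channels into groups and compare
        # the mean squared eigenvalue of each group's covariance with the square
        # of its mean eigenvalue; the ratio is 1.0 iff all eigenvalues are equal.
        num_frames, num_channels = x.shape
        x = x.reshape(num_frames, num_groups, num_channels // num_groups).transpose(0, 1)
        cov = torch.matmul(x.transpose(1, 2), x) / num_frames  # (num_groups, c, c)
        eigs = torch.linalg.eigvalsh(cov)
        return ((eigs ** 2).mean() / eigs.mean() ** 2).item()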
2023-03-09 05:47:23,777 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6790, 3.5997, 5.0172, 4.3225, 3.3442, 2.8516, 4.4589, 5.1182], device='cuda:3'), covar=tensor([0.0757, 0.1565, 0.0116, 0.0365, 0.0859, 0.1228, 0.0351, 0.0225], device='cuda:3'), in_proj_covar=tensor([0.0138, 0.0255, 0.0118, 0.0168, 0.0181, 0.0181, 0.0182, 0.0167], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:47:39,656 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=49028.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:47:56,917 INFO [train.py:898] (3/4) Epoch 14, batch 1800, loss[loss=0.1904, simple_loss=0.2833, pruned_loss=0.04873, over 18475.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2652, pruned_loss=0.04575, over 3576371.13 frames. ], batch size: 59, lr: 8.12e-03, grad_scale: 8.0
2023-03-09 05:48:09,252 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49053.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:48:13,418 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.931e+02 2.916e+02 3.364e+02 3.970e+02 9.603e+02, threshold=6.727e+02, percent-clipped=4.0
2023-03-09 05:48:40,869 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49079.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:48:56,137 INFO [train.py:898] (3/4) Epoch 14, batch 1850, loss[loss=0.1905, simple_loss=0.2805, pruned_loss=0.05019, over 16085.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2655, pruned_loss=0.04587, over 3570547.43 frames. ], batch size: 94, lr: 8.12e-03, grad_scale: 8.0
2023-03-09 05:49:06,525 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=49101.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:49:37,851 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=49127.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:49:55,817 INFO [train.py:898] (3/4) Epoch 14, batch 1900, loss[loss=0.1609, simple_loss=0.2593, pruned_loss=0.03127, over 18381.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2655, pruned_loss=0.04567, over 3563531.12 frames. ], batch size: 52, lr: 8.11e-03, grad_scale: 8.0
2023-03-09 05:50:00,223 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2188, 3.1524, 2.0851, 4.0476, 2.7711, 3.8488, 2.0907, 3.4847], device='cuda:3'), covar=tensor([0.0682, 0.0779, 0.1305, 0.0471, 0.0739, 0.0268, 0.1239, 0.0378], device='cuda:3'), in_proj_covar=tensor([0.0200, 0.0216, 0.0181, 0.0257, 0.0181, 0.0251, 0.0195, 0.0190], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:50:11,574 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.722e+02 2.855e+02 3.283e+02 4.141e+02 6.742e+02, threshold=6.565e+02, percent-clipped=1.0
2023-03-09 05:50:29,122 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9768, 5.4282, 3.0388, 5.2494, 5.1488, 5.4643, 5.2351, 2.7584], device='cuda:3'), covar=tensor([0.0169, 0.0042, 0.0637, 0.0065, 0.0061, 0.0041, 0.0073, 0.0902], device='cuda:3'), in_proj_covar=tensor([0.0079, 0.0071, 0.0090, 0.0084, 0.0078, 0.0067, 0.0078, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 05:50:36,989 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49177.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:50:54,815 INFO [train.py:898] (3/4) Epoch 14, batch 1950, loss[loss=0.2284, simple_loss=0.2947, pruned_loss=0.08099, over 12201.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2655, pruned_loss=0.04576, over 3549072.07 frames. ], batch size: 129, lr: 8.11e-03, grad_scale: 8.0
2023-03-09 05:50:56,172 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49194.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:51:04,228 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0351, 5.1182, 5.3362, 5.2718, 4.9322, 5.9371, 5.5652, 5.1468], device='cuda:3'), covar=tensor([0.1138, 0.0693, 0.0710, 0.0688, 0.1494, 0.0708, 0.0604, 0.1607], device='cuda:3'), in_proj_covar=tensor([0.0328, 0.0258, 0.0271, 0.0272, 0.0305, 0.0380, 0.0252, 0.0374], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3')
2023-03-09 05:51:05,384 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7956, 4.1006, 4.0992, 4.1434, 3.7218, 3.9985, 3.6831, 4.0372], device='cuda:3'), covar=tensor([0.0293, 0.0378, 0.0291, 0.0523, 0.0374, 0.0279, 0.1002, 0.0350], device='cuda:3'), in_proj_covar=tensor([0.0192, 0.0238, 0.0226, 0.0277, 0.0240, 0.0239, 0.0295, 0.0231], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 05:51:46,975 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49236.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:51:54,543 INFO [train.py:898] (3/4) Epoch 14, batch 2000, loss[loss=0.2296, simple_loss=0.3051, pruned_loss=0.077, over 12517.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2659, pruned_loss=0.04595, over 3557667.64 frames. ], batch size: 131, lr: 8.10e-03, grad_scale: 8.0
2023-03-09 05:51:56,194 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6678, 2.1189, 2.5929, 2.7805, 3.3104, 4.9833, 4.7252, 3.7645], device='cuda:3'), covar=tensor([0.1465, 0.2328, 0.2824, 0.1564, 0.2120, 0.0145, 0.0360, 0.0660], device='cuda:3'), in_proj_covar=tensor([0.0261, 0.0316, 0.0338, 0.0256, 0.0367, 0.0205, 0.0274, 0.0222], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 05:52:09,966 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.669e+02 2.933e+02 3.417e+02 3.982e+02 2.319e+03, threshold=6.833e+02, percent-clipped=9.0
2023-03-09 05:52:53,830 INFO [train.py:898] (3/4) Epoch 14, batch 2050, loss[loss=0.1841, simple_loss=0.2763, pruned_loss=0.04598, over 18586.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.266, pruned_loss=0.04577, over 3564778.44 frames. ], batch size: 54, lr: 8.10e-03, grad_scale: 8.0
2023-03-09 05:52:58,948 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49297.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:53:28,741 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9717, 4.9531, 5.0972, 4.7756, 4.7220, 4.9087, 5.1971, 5.1406], device='cuda:3'), covar=tensor([0.0063, 0.0070, 0.0050, 0.0110, 0.0069, 0.0108, 0.0087, 0.0094], device='cuda:3'), in_proj_covar=tensor([0.0084, 0.0060, 0.0063, 0.0081, 0.0066, 0.0090, 0.0076, 0.0077], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:53:54,012 INFO [train.py:898] (3/4) Epoch 14, batch 2100, loss[loss=0.1719, simple_loss=0.243, pruned_loss=0.05044, over 16745.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2657, pruned_loss=0.04593, over 3568294.51 frames. ], batch size: 37, lr: 8.09e-03, grad_scale: 8.0
2023-03-09 05:54:08,752 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.284e+02 2.957e+02 3.376e+02 4.046e+02 6.341e+02, threshold=6.752e+02, percent-clipped=0.0
2023-03-09 05:54:19,628 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.81 vs. limit=2.0
2023-03-09 05:54:23,215 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-09 05:54:53,537 INFO [train.py:898] (3/4) Epoch 14, batch 2150, loss[loss=0.1888, simple_loss=0.2775, pruned_loss=0.05001, over 18399.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.265, pruned_loss=0.04594, over 3558035.97 frames. ], batch size: 52, lr: 8.09e-03, grad_scale: 8.0
2023-03-09 05:55:16,692 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9986, 3.7989, 4.9910, 2.7449, 4.3503, 2.6255, 3.2075, 2.0574], device='cuda:3'), covar=tensor([0.1005, 0.0856, 0.0122, 0.0840, 0.0635, 0.2477, 0.2447, 0.1773], device='cuda:3'), in_proj_covar=tensor([0.0206, 0.0225, 0.0139, 0.0178, 0.0240, 0.0254, 0.0300, 0.0218], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 05:55:27,930 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49422.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:55:39,987 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49432.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 05:55:44,048 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1217, 5.6322, 3.1941, 5.4084, 5.2782, 5.6600, 5.4723, 3.1611], device='cuda:3'), covar=tensor([0.0187, 0.0058, 0.0628, 0.0067, 0.0066, 0.0060, 0.0068, 0.0830], device='cuda:3'), in_proj_covar=tensor([0.0080, 0.0072, 0.0090, 0.0085, 0.0079, 0.0068, 0.0079, 0.0093], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 05:55:52,814 INFO [train.py:898] (3/4) Epoch 14, batch 2200, loss[loss=0.1988, simple_loss=0.2844, pruned_loss=0.05656, over 18190.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2649, pruned_loss=0.04594, over 3572082.74 frames. ], batch size: 60, lr: 8.09e-03, grad_scale: 8.0
2023-03-09 05:55:53,347 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7302, 3.7635, 5.1714, 4.4574, 3.1759, 3.0423, 4.5036, 5.2643], device='cuda:3'), covar=tensor([0.0754, 0.1482, 0.0097, 0.0312, 0.0993, 0.1083, 0.0325, 0.0163], device='cuda:3'), in_proj_covar=tensor([0.0138, 0.0255, 0.0119, 0.0168, 0.0181, 0.0180, 0.0183, 0.0166], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:55:58,803 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4771, 4.5361, 4.5429, 4.3713, 4.3189, 4.3557, 4.7156, 4.7223], device='cuda:3'), covar=tensor([0.0091, 0.0073, 0.0077, 0.0106, 0.0074, 0.0139, 0.0071, 0.0088], device='cuda:3'), in_proj_covar=tensor([0.0084, 0.0061, 0.0063, 0.0081, 0.0067, 0.0091, 0.0076, 0.0077], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 05:56:07,491 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.048e+02 2.848e+02 3.284e+02 4.322e+02 8.537e+02, threshold=6.567e+02, percent-clipped=2.0
2023-03-09 05:56:33,168 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49477.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:56:40,662 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49483.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:56:52,242 INFO [train.py:898] (3/4) Epoch 14, batch 2250, loss[loss=0.1833, simple_loss=0.2634, pruned_loss=0.05158, over 18375.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.2641, pruned_loss=0.0456, over 3591309.25 frames. ], batch size: 50, lr: 8.08e-03, grad_scale: 8.0
2023-03-09 05:56:52,651 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49493.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 05:56:53,672 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49494.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:57:29,646 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=49525.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:57:49,601 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=49542.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:57:50,493 INFO [train.py:898] (3/4) Epoch 14, batch 2300, loss[loss=0.1693, simple_loss=0.2573, pruned_loss=0.04065, over 18252.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2653, pruned_loss=0.04573, over 3595522.31 frames. ], batch size: 45, lr: 8.08e-03, grad_scale: 8.0
2023-03-09 05:58:05,060 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.043e+02 3.294e+02 3.736e+02 4.526e+02 1.029e+03, threshold=7.472e+02, percent-clipped=8.0
2023-03-09 05:58:38,395 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0
2023-03-09 05:58:47,680 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49592.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 05:58:48,616 INFO [train.py:898] (3/4) Epoch 14, batch 2350, loss[loss=0.1774, simple_loss=0.27, pruned_loss=0.04241, over 18482.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2662, pruned_loss=0.04586, over 3595631.82 frames. ], batch size: 51, lr: 8.07e-03, grad_scale: 8.0
2023-03-09 05:59:48,059 INFO [train.py:898] (3/4) Epoch 14, batch 2400, loss[loss=0.1817, simple_loss=0.2686, pruned_loss=0.04737, over 18251.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2657, pruned_loss=0.04562, over 3599500.49 frames. ], batch size: 60, lr: 8.07e-03, grad_scale: 8.0
2023-03-09 05:59:51,302 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9839, 4.7399, 4.7870, 3.4117, 3.9190, 3.7292, 2.9067, 2.8037], device='cuda:3'), covar=tensor([0.0191, 0.0118, 0.0092, 0.0297, 0.0297, 0.0193, 0.0687, 0.0821], device='cuda:3'), in_proj_covar=tensor([0.0061, 0.0050, 0.0052, 0.0061, 0.0082, 0.0059, 0.0072, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 06:00:03,320 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.270e+02 3.059e+02 3.720e+02 4.555e+02 1.609e+03, threshold=7.441e+02, percent-clipped=3.0
2023-03-09 06:00:46,964 INFO [train.py:898] (3/4) Epoch 14, batch 2450, loss[loss=0.1672, simple_loss=0.2435, pruned_loss=0.04548, over 18380.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2656, pruned_loss=0.04565, over 3602299.52 frames. ], batch size: 42, lr: 8.07e-03, grad_scale: 8.0
2023-03-09 06:01:46,431 INFO [train.py:898] (3/4) Epoch 14, batch 2500, loss[loss=0.1858, simple_loss=0.282, pruned_loss=0.04481, over 18631.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.266, pruned_loss=0.04576, over 3597945.68 frames. ], batch size: 52, lr: 8.06e-03, grad_scale: 8.0
2023-03-09 06:02:01,765 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.909e+02 2.668e+02 3.143e+02 3.903e+02 7.073e+02, threshold=6.286e+02, percent-clipped=0.0
2023-03-09 06:02:13,397 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49766.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 06:02:27,096 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49778.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 06:02:39,199 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49788.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 06:02:44,644 INFO [train.py:898] (3/4) Epoch 14, batch 2550, loss[loss=0.1908, simple_loss=0.2763, pruned_loss=0.05268, over 18351.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2668, pruned_loss=0.04623, over 3590601.20 frames. ], batch size: 56, lr: 8.06e-03, grad_scale: 8.0
2023-03-09 06:02:55,377 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5907, 3.4422, 1.8850, 4.4409, 3.0262, 4.2029, 2.0850, 3.6348], device='cuda:3'), covar=tensor([0.0493, 0.0767, 0.1547, 0.0420, 0.0818, 0.0323, 0.1337, 0.0531], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0221, 0.0184, 0.0261, 0.0186, 0.0256, 0.0198, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 06:03:24,919 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49827.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 06:03:32,942 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.50 vs. limit=5.0
2023-03-09 06:03:36,390 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5690, 5.5271, 5.1333, 5.4598, 5.4605, 4.7708, 5.3587, 5.0657], device='cuda:3'), covar=tensor([0.0387, 0.0400, 0.1365, 0.0843, 0.0632, 0.0416, 0.0389, 0.1112], device='cuda:3'), in_proj_covar=tensor([0.0440, 0.0499, 0.0658, 0.0394, 0.0385, 0.0452, 0.0477, 0.0629], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 06:03:38,656 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7036, 5.3260, 5.2205, 5.2448, 4.7278, 5.1257, 4.5589, 5.1098], device='cuda:3'), covar=tensor([0.0246, 0.0261, 0.0206, 0.0356, 0.0403, 0.0207, 0.1137, 0.0312], device='cuda:3'), in_proj_covar=tensor([0.0192, 0.0237, 0.0228, 0.0279, 0.0242, 0.0240, 0.0293, 0.0231], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 06:03:44,069 INFO [train.py:898] (3/4) Epoch 14, batch 2600, loss[loss=0.1692, simple_loss=0.2603, pruned_loss=0.03906, over 18314.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.2664, pruned_loss=0.04625, over 3578704.97 frames. ], batch size: 54, lr: 8.05e-03, grad_scale: 8.0
2023-03-09 06:03:59,916 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.176e+02 2.916e+02 3.456e+02 4.257e+02 7.547e+02, threshold=6.912e+02, percent-clipped=3.0
2023-03-09 06:04:14,107 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-09 06:04:42,291 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49892.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 06:04:43,078 INFO [train.py:898] (3/4) Epoch 14, batch 2650, loss[loss=0.1952, simple_loss=0.2777, pruned_loss=0.05639, over 16021.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2658, pruned_loss=0.04608, over 3580644.90 frames. ], batch size: 95, lr: 8.05e-03, grad_scale: 8.0
2023-03-09 06:05:04,375 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0
2023-03-09 06:05:38,381 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=49940.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 06:05:42,181 INFO [train.py:898] (3/4) Epoch 14, batch 2700, loss[loss=0.1852, simple_loss=0.2752, pruned_loss=0.04764, over 17054.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2658, pruned_loss=0.04606, over 3580583.60 frames. ], batch size: 78, lr: 8.05e-03, grad_scale: 8.0
2023-03-09 06:05:47,133 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4138, 5.9263, 5.4773, 5.6708, 5.4695, 5.4903, 5.9544, 5.8712], device='cuda:3'), covar=tensor([0.1226, 0.0785, 0.0471, 0.0679, 0.1358, 0.0679, 0.0570, 0.0741], device='cuda:3'), in_proj_covar=tensor([0.0558, 0.0468, 0.0348, 0.0503, 0.0681, 0.0501, 0.0659, 0.0496], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 06:05:47,719 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-09 06:05:57,464 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.140e+02 2.972e+02 3.465e+02 4.147e+02 6.859e+02, threshold=6.931e+02, percent-clipped=0.0
2023-03-09 06:06:40,849 INFO [train.py:898] (3/4) Epoch 14, batch 2750, loss[loss=0.1557, simple_loss=0.2357, pruned_loss=0.03785, over 18431.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.266, pruned_loss=0.04586, over 3590162.12 frames. ], batch size: 43, lr: 8.04e-03, grad_scale: 8.0
2023-03-09 06:07:13,142 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5325, 2.8475, 2.5363, 2.9400, 3.6243, 3.6424, 3.2365, 2.9963], device='cuda:3'), covar=tensor([0.0281, 0.0359, 0.0631, 0.0368, 0.0217, 0.0167, 0.0336, 0.0352], device='cuda:3'), in_proj_covar=tensor([0.0125, 0.0120, 0.0155, 0.0146, 0.0113, 0.0099, 0.0139, 0.0140], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 06:07:43,617 INFO [train.py:898] (3/4) Epoch 14, batch 2800, loss[loss=0.1632, simple_loss=0.2394, pruned_loss=0.04348, over 18435.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2667, pruned_loss=0.0464, over 3590155.41 frames. ], batch size: 43, lr: 8.04e-03, grad_scale: 16.0
2023-03-09 06:07:58,949 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.149e+02 3.000e+02 3.519e+02 4.137e+02 9.656e+02, threshold=7.037e+02, percent-clipped=4.0
2023-03-09 06:08:25,602 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50078.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 06:08:37,066 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50088.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 06:08:42,939 INFO [train.py:898] (3/4) Epoch 14, batch 2850, loss[loss=0.1823, simple_loss=0.2811, pruned_loss=0.04175, over 18357.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2666, pruned_loss=0.04638, over 3598010.08 frames. ], batch size: 55, lr: 8.03e-03, grad_scale: 16.0
], tot_loss[loss=0.1797, simple_loss=0.2666, pruned_loss=0.04638, over 3598010.08 frames. ], batch size: 55, lr: 8.03e-03, grad_scale: 16.0 2023-03-09 06:09:17,693 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50122.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:09:22,113 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=50126.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:09:24,535 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50128.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:09:33,405 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=50136.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 06:09:40,914 INFO [train.py:898] (3/4) Epoch 14, batch 2900, loss[loss=0.1929, simple_loss=0.2818, pruned_loss=0.052, over 17912.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2657, pruned_loss=0.04599, over 3596263.73 frames. ], batch size: 65, lr: 8.03e-03, grad_scale: 16.0 2023-03-09 06:09:56,508 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.084e+02 3.147e+02 3.827e+02 4.513e+02 7.864e+02, threshold=7.653e+02, percent-clipped=1.0 2023-03-09 06:10:34,488 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50189.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:10:38,564 INFO [train.py:898] (3/4) Epoch 14, batch 2950, loss[loss=0.1709, simple_loss=0.2635, pruned_loss=0.03912, over 18358.00 frames. ], tot_loss[loss=0.179, simple_loss=0.266, pruned_loss=0.046, over 3594608.30 frames. ], batch size: 55, lr: 8.03e-03, grad_scale: 16.0 2023-03-09 06:10:43,769 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50197.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 06:10:46,627 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-09 06:10:48,897 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9438, 3.8297, 4.9560, 2.9260, 4.3383, 2.6389, 3.0532, 1.8836], device='cuda:3'), covar=tensor([0.0982, 0.0774, 0.0123, 0.0749, 0.0539, 0.2401, 0.2481, 0.1935], device='cuda:3'), in_proj_covar=tensor([0.0209, 0.0228, 0.0140, 0.0182, 0.0240, 0.0259, 0.0303, 0.0221], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 06:11:03,994 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3892, 2.6183, 3.8662, 3.5296, 2.4694, 4.1795, 3.6886, 2.7166], device='cuda:3'), covar=tensor([0.0564, 0.1599, 0.0321, 0.0370, 0.1567, 0.0239, 0.0610, 0.1079], device='cuda:3'), in_proj_covar=tensor([0.0201, 0.0232, 0.0176, 0.0150, 0.0217, 0.0196, 0.0224, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 06:11:13,065 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3777, 3.1714, 1.7946, 4.1717, 2.9252, 4.1133, 2.2083, 3.7779], device='cuda:3'), covar=tensor([0.0680, 0.0937, 0.1638, 0.0506, 0.0934, 0.0288, 0.1291, 0.0407], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0220, 0.0185, 0.0261, 0.0187, 0.0255, 0.0197, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:11:13,277 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.77 vs. 
limit=5.0 2023-03-09 06:11:37,784 INFO [train.py:898] (3/4) Epoch 14, batch 3000, loss[loss=0.1563, simple_loss=0.2369, pruned_loss=0.03788, over 18178.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2668, pruned_loss=0.04628, over 3588198.18 frames. ], batch size: 44, lr: 8.02e-03, grad_scale: 16.0 2023-03-09 06:11:37,785 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 06:11:49,728 INFO [train.py:932] (3/4) Epoch 14, validation: loss=0.1532, simple_loss=0.2546, pruned_loss=0.02587, over 944034.00 frames. 2023-03-09 06:11:49,729 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 06:12:04,123 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.273e+02 3.285e+02 3.966e+02 4.720e+02 1.017e+03, threshold=7.933e+02, percent-clipped=5.0 2023-03-09 06:12:07,464 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50258.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 06:12:17,929 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-09 06:12:46,942 INFO [train.py:898] (3/4) Epoch 14, batch 3050, loss[loss=0.2018, simple_loss=0.2868, pruned_loss=0.05843, over 18388.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2658, pruned_loss=0.04603, over 3589026.51 frames. ], batch size: 52, lr: 8.02e-03, grad_scale: 16.0 2023-03-09 06:12:50,094 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4061, 2.8808, 2.2088, 2.9280, 3.5647, 3.5371, 3.0676, 2.9267], device='cuda:3'), covar=tensor([0.0232, 0.0270, 0.0622, 0.0303, 0.0195, 0.0152, 0.0325, 0.0339], device='cuda:3'), in_proj_covar=tensor([0.0126, 0.0120, 0.0156, 0.0146, 0.0113, 0.0100, 0.0140, 0.0141], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:13:20,666 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8022, 3.7025, 3.5346, 3.1776, 3.4752, 2.8158, 2.8569, 3.7802], device='cuda:3'), covar=tensor([0.0041, 0.0080, 0.0069, 0.0119, 0.0071, 0.0162, 0.0157, 0.0043], device='cuda:3'), in_proj_covar=tensor([0.0113, 0.0135, 0.0117, 0.0169, 0.0121, 0.0165, 0.0166, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 06:13:32,232 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.52 vs. limit=5.0 2023-03-09 06:13:45,663 INFO [train.py:898] (3/4) Epoch 14, batch 3100, loss[loss=0.1518, simple_loss=0.2356, pruned_loss=0.03399, over 18484.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2652, pruned_loss=0.04569, over 3575435.65 frames. ], batch size: 44, lr: 8.01e-03, grad_scale: 16.0 2023-03-09 06:14:00,590 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.982e+02 2.722e+02 3.409e+02 4.335e+02 9.164e+02, threshold=6.818e+02, percent-clipped=2.0 2023-03-09 06:14:42,807 INFO [train.py:898] (3/4) Epoch 14, batch 3150, loss[loss=0.2277, simple_loss=0.2988, pruned_loss=0.07832, over 13217.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2641, pruned_loss=0.04532, over 3581618.03 frames. ], batch size: 130, lr: 8.01e-03, grad_scale: 16.0 2023-03-09 06:15:18,828 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50422.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:15:42,275 INFO [train.py:898] (3/4) Epoch 14, batch 3200, loss[loss=0.179, simple_loss=0.2701, pruned_loss=0.04394, over 18407.00 frames. 
], tot_loss[loss=0.178, simple_loss=0.2651, pruned_loss=0.04547, over 3577079.61 frames. ], batch size: 52, lr: 8.01e-03, grad_scale: 16.0 2023-03-09 06:15:58,903 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.049e+02 3.101e+02 3.675e+02 4.644e+02 1.158e+03, threshold=7.350e+02, percent-clipped=4.0 2023-03-09 06:16:14,168 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=50470.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:16:30,666 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50484.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:16:40,182 INFO [train.py:898] (3/4) Epoch 14, batch 3250, loss[loss=0.1751, simple_loss=0.2641, pruned_loss=0.04308, over 18247.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2657, pruned_loss=0.0461, over 3564859.29 frames. ], batch size: 60, lr: 8.00e-03, grad_scale: 8.0 2023-03-09 06:17:12,344 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3223, 3.1745, 1.8990, 4.1189, 2.8493, 3.9941, 2.1624, 3.6036], device='cuda:3'), covar=tensor([0.0577, 0.0801, 0.1389, 0.0432, 0.0831, 0.0288, 0.1230, 0.0413], device='cuda:3'), in_proj_covar=tensor([0.0202, 0.0218, 0.0183, 0.0261, 0.0185, 0.0255, 0.0196, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:17:32,954 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-09 06:17:39,139 INFO [train.py:898] (3/4) Epoch 14, batch 3300, loss[loss=0.1835, simple_loss=0.2661, pruned_loss=0.05043, over 18285.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2651, pruned_loss=0.04559, over 3564789.52 frames. ], batch size: 60, lr: 8.00e-03, grad_scale: 8.0 2023-03-09 06:17:42,660 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50546.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:17:50,972 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50553.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 06:17:55,810 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.637e+02 3.010e+02 3.418e+02 4.071e+02 6.649e+02, threshold=6.837e+02, percent-clipped=0.0 2023-03-09 06:18:37,883 INFO [train.py:898] (3/4) Epoch 14, batch 3350, loss[loss=0.1704, simple_loss=0.2488, pruned_loss=0.04603, over 18184.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2654, pruned_loss=0.04565, over 3567666.17 frames. ], batch size: 44, lr: 8.00e-03, grad_scale: 8.0 2023-03-09 06:18:55,300 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50607.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:19:12,515 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7208, 2.2996, 2.6207, 2.8164, 3.5445, 5.2501, 4.8565, 3.7362], device='cuda:3'), covar=tensor([0.1453, 0.2178, 0.2553, 0.1504, 0.1835, 0.0125, 0.0318, 0.0678], device='cuda:3'), in_proj_covar=tensor([0.0265, 0.0320, 0.0343, 0.0260, 0.0370, 0.0210, 0.0276, 0.0226], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 06:19:36,703 INFO [train.py:898] (3/4) Epoch 14, batch 3400, loss[loss=0.1899, simple_loss=0.274, pruned_loss=0.05284, over 18328.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2654, pruned_loss=0.04608, over 3548468.97 frames. 
], batch size: 56, lr: 7.99e-03, grad_scale: 8.0 2023-03-09 06:19:53,295 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.118e+02 2.885e+02 3.458e+02 4.304e+02 7.306e+02, threshold=6.916e+02, percent-clipped=1.0 2023-03-09 06:20:17,081 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6649, 3.0070, 4.2390, 3.7356, 2.5820, 4.5580, 3.9497, 2.7297], device='cuda:3'), covar=tensor([0.0484, 0.1292, 0.0270, 0.0315, 0.1503, 0.0190, 0.0486, 0.1073], device='cuda:3'), in_proj_covar=tensor([0.0204, 0.0232, 0.0179, 0.0151, 0.0219, 0.0199, 0.0226, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 06:20:17,102 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5099, 3.2484, 1.8969, 4.2536, 2.9884, 4.2026, 2.1016, 3.8243], device='cuda:3'), covar=tensor([0.0553, 0.0937, 0.1553, 0.0481, 0.0837, 0.0269, 0.1292, 0.0402], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0218, 0.0183, 0.0261, 0.0186, 0.0253, 0.0195, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:20:35,021 INFO [train.py:898] (3/4) Epoch 14, batch 3450, loss[loss=0.1628, simple_loss=0.2502, pruned_loss=0.03768, over 18354.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2647, pruned_loss=0.04591, over 3558474.01 frames. ], batch size: 46, lr: 7.99e-03, grad_scale: 8.0 2023-03-09 06:20:42,151 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5033, 6.1154, 5.6323, 5.8727, 5.6907, 5.5730, 6.1542, 6.1302], device='cuda:3'), covar=tensor([0.1125, 0.0584, 0.0397, 0.0663, 0.1257, 0.0753, 0.0513, 0.0552], device='cuda:3'), in_proj_covar=tensor([0.0550, 0.0464, 0.0346, 0.0496, 0.0679, 0.0499, 0.0664, 0.0493], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 06:21:18,223 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.3902, 2.0187, 2.3251, 2.5566, 2.7386, 4.8805, 4.5599, 3.5277], device='cuda:3'), covar=tensor([0.1891, 0.2989, 0.3548, 0.1899, 0.3344, 0.0227, 0.0460, 0.0799], device='cuda:3'), in_proj_covar=tensor([0.0267, 0.0322, 0.0344, 0.0260, 0.0372, 0.0211, 0.0278, 0.0226], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 06:21:32,281 INFO [train.py:898] (3/4) Epoch 14, batch 3500, loss[loss=0.1708, simple_loss=0.2599, pruned_loss=0.0409, over 17962.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.264, pruned_loss=0.04579, over 3563000.71 frames. 
], batch size: 65, lr: 7.98e-03, grad_scale: 8.0 2023-03-09 06:21:34,961 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1342, 5.4476, 2.8575, 5.2300, 5.1712, 5.4961, 5.2577, 3.0111], device='cuda:3'), covar=tensor([0.0172, 0.0056, 0.0716, 0.0075, 0.0063, 0.0052, 0.0087, 0.0831], device='cuda:3'), in_proj_covar=tensor([0.0081, 0.0072, 0.0090, 0.0087, 0.0080, 0.0069, 0.0079, 0.0094], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 06:21:47,960 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.084e+02 3.008e+02 3.421e+02 4.567e+02 7.789e+02, threshold=6.843e+02, percent-clipped=2.0 2023-03-09 06:22:15,549 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50782.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:22:17,535 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50784.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:22:26,426 INFO [train.py:898] (3/4) Epoch 14, batch 3550, loss[loss=0.2546, simple_loss=0.3262, pruned_loss=0.09144, over 12580.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2645, pruned_loss=0.04637, over 3542344.35 frames. ], batch size: 130, lr: 7.98e-03, grad_scale: 8.0 2023-03-09 06:22:43,258 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50808.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 06:23:08,090 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=50832.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:23:19,508 INFO [train.py:898] (3/4) Epoch 14, batch 3600, loss[loss=0.17, simple_loss=0.2584, pruned_loss=0.04081, over 18139.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2655, pruned_loss=0.04656, over 3540928.94 frames. ], batch size: 44, lr: 7.98e-03, grad_scale: 8.0 2023-03-09 06:23:19,884 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50843.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:23:30,333 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50853.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 06:23:34,501 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.994e+02 3.100e+02 3.552e+02 4.596e+02 8.414e+02, threshold=7.104e+02, percent-clipped=7.0 2023-03-09 06:23:47,688 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50869.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 06:24:25,705 INFO [train.py:898] (3/4) Epoch 15, batch 0, loss[loss=0.1684, simple_loss=0.2537, pruned_loss=0.04154, over 18282.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2537, pruned_loss=0.04154, over 18282.00 frames. ], batch size: 45, lr: 7.70e-03, grad_scale: 8.0 2023-03-09 06:24:25,706 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 06:24:37,511 INFO [train.py:932] (3/4) Epoch 15, validation: loss=0.1543, simple_loss=0.2557, pruned_loss=0.02649, over 944034.00 frames. 2023-03-09 06:24:37,511 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 06:24:45,229 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. 
limit=2.0 2023-03-09 06:25:06,589 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=50901.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 06:25:07,638 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50902.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:25:17,780 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1423, 2.5524, 3.2157, 3.0899, 2.4436, 3.4073, 3.2284, 2.5587], device='cuda:3'), covar=tensor([0.0426, 0.1159, 0.0348, 0.0329, 0.1275, 0.0259, 0.0587, 0.0839], device='cuda:3'), in_proj_covar=tensor([0.0199, 0.0228, 0.0177, 0.0149, 0.0215, 0.0195, 0.0221, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 06:25:35,410 INFO [train.py:898] (3/4) Epoch 15, batch 50, loss[loss=0.1764, simple_loss=0.2662, pruned_loss=0.04326, over 18105.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.267, pruned_loss=0.0469, over 811454.34 frames. ], batch size: 62, lr: 7.70e-03, grad_scale: 8.0 2023-03-09 06:25:58,275 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50946.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:26:11,146 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.078e+02 2.874e+02 3.277e+02 4.282e+02 1.387e+03, threshold=6.554e+02, percent-clipped=5.0 2023-03-09 06:26:27,460 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5223, 2.9876, 4.2452, 3.5739, 2.9267, 4.6012, 3.8949, 2.8270], device='cuda:3'), covar=tensor([0.0578, 0.1302, 0.0249, 0.0410, 0.1270, 0.0186, 0.0470, 0.1008], device='cuda:3'), in_proj_covar=tensor([0.0200, 0.0230, 0.0178, 0.0150, 0.0216, 0.0196, 0.0222, 0.0194], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 06:26:34,008 INFO [train.py:898] (3/4) Epoch 15, batch 100, loss[loss=0.1876, simple_loss=0.2774, pruned_loss=0.04885, over 17211.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.2664, pruned_loss=0.04636, over 1434346.78 frames. ], batch size: 78, lr: 7.69e-03, grad_scale: 8.0 2023-03-09 06:27:10,732 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51007.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:27:33,101 INFO [train.py:898] (3/4) Epoch 15, batch 150, loss[loss=0.1804, simple_loss=0.272, pruned_loss=0.04439, over 18647.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.265, pruned_loss=0.04519, over 1926516.29 frames. 
], batch size: 52, lr: 7.69e-03, grad_scale: 8.0 2023-03-09 06:28:07,488 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3538, 5.3134, 4.8751, 5.2953, 5.2726, 4.5848, 5.1687, 4.9084], device='cuda:3'), covar=tensor([0.0406, 0.0421, 0.1398, 0.0755, 0.0550, 0.0456, 0.0389, 0.1066], device='cuda:3'), in_proj_covar=tensor([0.0437, 0.0496, 0.0652, 0.0389, 0.0388, 0.0450, 0.0478, 0.0619], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 06:28:09,469 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.782e+02 2.876e+02 3.303e+02 4.141e+02 9.283e+02, threshold=6.606e+02, percent-clipped=4.0 2023-03-09 06:28:15,427 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51062.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:28:31,739 INFO [train.py:898] (3/4) Epoch 15, batch 200, loss[loss=0.1881, simple_loss=0.2784, pruned_loss=0.04887, over 18236.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2641, pruned_loss=0.04498, over 2311133.78 frames. ], batch size: 60, lr: 7.69e-03, grad_scale: 8.0 2023-03-09 06:28:44,358 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3596, 4.1667, 4.3918, 4.1065, 4.1419, 4.2985, 4.4723, 4.3988], device='cuda:3'), covar=tensor([0.0121, 0.0137, 0.0103, 0.0147, 0.0116, 0.0142, 0.0112, 0.0156], device='cuda:3'), in_proj_covar=tensor([0.0086, 0.0062, 0.0065, 0.0084, 0.0068, 0.0093, 0.0079, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-09 06:29:25,617 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51123.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:29:29,637 INFO [train.py:898] (3/4) Epoch 15, batch 250, loss[loss=0.1845, simple_loss=0.2747, pruned_loss=0.04717, over 18504.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.265, pruned_loss=0.04539, over 2591525.51 frames. ], batch size: 53, lr: 7.68e-03, grad_scale: 8.0 2023-03-09 06:29:41,523 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51138.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:30:03,162 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.733e+02 3.055e+02 3.759e+02 4.768e+02 8.337e+02, threshold=7.517e+02, percent-clipped=9.0 2023-03-09 06:30:12,416 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51164.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 06:30:21,615 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51172.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 06:30:27,049 INFO [train.py:898] (3/4) Epoch 15, batch 300, loss[loss=0.1708, simple_loss=0.2599, pruned_loss=0.04082, over 16233.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.265, pruned_loss=0.04555, over 2808655.62 frames. ], batch size: 94, lr: 7.68e-03, grad_scale: 8.0 2023-03-09 06:30:56,870 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51202.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:31:19,141 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-09 06:31:26,184 INFO [train.py:898] (3/4) Epoch 15, batch 350, loss[loss=0.1625, simple_loss=0.2419, pruned_loss=0.04151, over 18257.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.263, pruned_loss=0.04506, over 2982634.61 frames. 
], batch size: 45, lr: 7.67e-03, grad_scale: 8.0 2023-03-09 06:31:27,838 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8091, 4.1394, 2.2665, 3.9646, 5.0250, 2.5464, 3.3628, 3.5365], device='cuda:3'), covar=tensor([0.0140, 0.1045, 0.1693, 0.0606, 0.0073, 0.1253, 0.0932, 0.0974], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0247, 0.0194, 0.0191, 0.0100, 0.0178, 0.0207, 0.0214], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:31:29,219 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.85 vs. limit=2.0 2023-03-09 06:31:33,505 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51233.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 06:31:42,282 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0212, 5.0788, 2.5826, 4.9689, 4.8631, 5.1445, 4.9174, 2.6355], device='cuda:3'), covar=tensor([0.0192, 0.0064, 0.0777, 0.0097, 0.0068, 0.0065, 0.0100, 0.0984], device='cuda:3'), in_proj_covar=tensor([0.0082, 0.0073, 0.0091, 0.0088, 0.0081, 0.0069, 0.0080, 0.0094], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 06:31:53,049 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=51250.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:32:01,197 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.843e+02 2.755e+02 3.363e+02 4.160e+02 1.249e+03, threshold=6.726e+02, percent-clipped=1.0 2023-03-09 06:32:20,989 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1481, 4.3679, 2.4117, 4.2462, 5.3345, 2.7466, 3.8249, 3.8744], device='cuda:3'), covar=tensor([0.0102, 0.1004, 0.1647, 0.0544, 0.0055, 0.1245, 0.0691, 0.0749], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0247, 0.0194, 0.0191, 0.0100, 0.0177, 0.0208, 0.0214], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:32:25,076 INFO [train.py:898] (3/4) Epoch 15, batch 400, loss[loss=0.159, simple_loss=0.2502, pruned_loss=0.03386, over 18276.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2618, pruned_loss=0.04408, over 3133228.20 frames. 
], batch size: 47, lr: 7.67e-03, grad_scale: 8.0 2023-03-09 06:32:52,614 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1119, 2.5940, 2.2226, 2.5426, 3.3632, 3.2147, 2.8793, 2.7109], device='cuda:3'), covar=tensor([0.0204, 0.0281, 0.0656, 0.0394, 0.0167, 0.0140, 0.0373, 0.0373], device='cuda:3'), in_proj_covar=tensor([0.0128, 0.0123, 0.0156, 0.0149, 0.0113, 0.0100, 0.0140, 0.0141], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:32:53,540 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51302.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:33:15,461 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8065, 5.3307, 5.3276, 5.3558, 4.8681, 5.2227, 4.6827, 5.2604], device='cuda:3'), covar=tensor([0.0243, 0.0255, 0.0201, 0.0346, 0.0355, 0.0237, 0.1063, 0.0263], device='cuda:3'), in_proj_covar=tensor([0.0193, 0.0239, 0.0234, 0.0283, 0.0244, 0.0242, 0.0294, 0.0230], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 06:33:23,602 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5922, 3.6292, 5.1446, 4.3934, 3.1922, 2.8874, 4.4872, 5.2163], device='cuda:3'), covar=tensor([0.0840, 0.1681, 0.0112, 0.0359, 0.0968, 0.1222, 0.0332, 0.0182], device='cuda:3'), in_proj_covar=tensor([0.0142, 0.0258, 0.0123, 0.0172, 0.0184, 0.0184, 0.0185, 0.0169], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:33:24,233 INFO [train.py:898] (3/4) Epoch 15, batch 450, loss[loss=0.1705, simple_loss=0.2621, pruned_loss=0.0395, over 18390.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2615, pruned_loss=0.04371, over 3236423.93 frames. ], batch size: 52, lr: 7.67e-03, grad_scale: 8.0 2023-03-09 06:33:30,048 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3461, 5.3116, 4.9144, 5.2012, 5.2357, 4.6355, 5.1593, 4.9096], device='cuda:3'), covar=tensor([0.0429, 0.0494, 0.1412, 0.0894, 0.0573, 0.0435, 0.0405, 0.1109], device='cuda:3'), in_proj_covar=tensor([0.0447, 0.0509, 0.0666, 0.0398, 0.0395, 0.0460, 0.0487, 0.0632], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 06:33:59,192 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.915e+02 2.968e+02 3.356e+02 4.062e+02 8.839e+02, threshold=6.712e+02, percent-clipped=1.0 2023-03-09 06:34:23,279 INFO [train.py:898] (3/4) Epoch 15, batch 500, loss[loss=0.1685, simple_loss=0.254, pruned_loss=0.04149, over 18296.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2623, pruned_loss=0.04419, over 3314030.09 frames. ], batch size: 49, lr: 7.66e-03, grad_scale: 8.0 2023-03-09 06:34:28,997 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51382.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:34:40,434 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51392.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:35:10,681 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51418.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:35:20,790 INFO [train.py:898] (3/4) Epoch 15, batch 550, loss[loss=0.1908, simple_loss=0.2793, pruned_loss=0.0511, over 18310.00 frames. 
], tot_loss[loss=0.1768, simple_loss=0.2635, pruned_loss=0.04505, over 3368175.94 frames. ], batch size: 57, lr: 7.66e-03, grad_scale: 8.0 2023-03-09 06:35:33,812 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51438.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:35:39,562 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51443.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:35:41,816 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6805, 4.3580, 4.2953, 3.1666, 3.6696, 3.3817, 2.2871, 2.3391], device='cuda:3'), covar=tensor([0.0238, 0.0157, 0.0110, 0.0349, 0.0342, 0.0212, 0.0850, 0.0932], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0053, 0.0056, 0.0065, 0.0085, 0.0061, 0.0075, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 06:35:50,554 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8802, 4.6272, 4.5736, 3.6505, 3.9519, 3.6683, 2.6124, 2.7830], device='cuda:3'), covar=tensor([0.0223, 0.0134, 0.0091, 0.0244, 0.0291, 0.0171, 0.0717, 0.0772], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0053, 0.0056, 0.0065, 0.0085, 0.0061, 0.0075, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 06:35:50,602 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51453.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:35:54,566 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.873e+02 3.112e+02 3.809e+02 4.675e+02 7.627e+02, threshold=7.618e+02, percent-clipped=1.0 2023-03-09 06:35:58,936 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-09 06:36:01,326 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8581, 4.1525, 2.4491, 3.9477, 5.0836, 2.3651, 3.6186, 3.8965], device='cuda:3'), covar=tensor([0.0145, 0.1035, 0.1786, 0.0644, 0.0064, 0.1562, 0.0788, 0.0677], device='cuda:3'), in_proj_covar=tensor([0.0137, 0.0250, 0.0196, 0.0192, 0.0101, 0.0179, 0.0210, 0.0216], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:36:03,380 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51464.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 06:36:18,186 INFO [train.py:898] (3/4) Epoch 15, batch 600, loss[loss=0.196, simple_loss=0.285, pruned_loss=0.05347, over 15956.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2633, pruned_loss=0.0449, over 3421370.18 frames. ], batch size: 94, lr: 7.66e-03, grad_scale: 8.0 2023-03-09 06:36:29,451 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=51486.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:36:39,869 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51495.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:36:58,953 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=51512.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 06:37:03,543 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0 2023-03-09 06:37:16,748 INFO [train.py:898] (3/4) Epoch 15, batch 650, loss[loss=0.1888, simple_loss=0.2811, pruned_loss=0.04831, over 18291.00 frames. 
], tot_loss[loss=0.1755, simple_loss=0.2623, pruned_loss=0.04428, over 3471531.08 frames. ], batch size: 57, lr: 7.65e-03, grad_scale: 8.0 2023-03-09 06:37:18,098 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51528.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 06:37:47,625 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4062, 5.2571, 5.6804, 5.6241, 5.2814, 6.2243, 5.8579, 5.4606], device='cuda:3'), covar=tensor([0.0961, 0.0578, 0.0642, 0.0599, 0.1505, 0.0681, 0.0589, 0.1750], device='cuda:3'), in_proj_covar=tensor([0.0336, 0.0261, 0.0277, 0.0277, 0.0313, 0.0390, 0.0255, 0.0376], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0004, 0.0002, 0.0003], device='cuda:3') 2023-03-09 06:37:51,168 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51556.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:37:51,901 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.157e+02 2.845e+02 3.389e+02 4.059e+02 1.145e+03, threshold=6.778e+02, percent-clipped=5.0 2023-03-09 06:38:15,818 INFO [train.py:898] (3/4) Epoch 15, batch 700, loss[loss=0.1576, simple_loss=0.2413, pruned_loss=0.03691, over 18504.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2631, pruned_loss=0.04462, over 3489337.62 frames. ], batch size: 44, lr: 7.65e-03, grad_scale: 8.0 2023-03-09 06:38:46,088 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51602.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:39:14,428 INFO [train.py:898] (3/4) Epoch 15, batch 750, loss[loss=0.1498, simple_loss=0.2354, pruned_loss=0.03208, over 18418.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2616, pruned_loss=0.04398, over 3513762.97 frames. ], batch size: 43, lr: 7.65e-03, grad_scale: 8.0 2023-03-09 06:39:14,872 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7117, 3.6174, 3.4567, 3.0441, 3.4442, 2.7843, 2.7131, 3.6562], device='cuda:3'), covar=tensor([0.0047, 0.0076, 0.0067, 0.0118, 0.0076, 0.0163, 0.0174, 0.0046], device='cuda:3'), in_proj_covar=tensor([0.0118, 0.0139, 0.0120, 0.0174, 0.0124, 0.0167, 0.0170, 0.0100], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 06:39:42,125 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=51650.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:39:49,769 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.857e+02 2.830e+02 3.241e+02 3.850e+02 9.423e+02, threshold=6.481e+02, percent-clipped=1.0 2023-03-09 06:39:59,030 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5232, 5.5261, 4.9683, 5.4371, 5.4988, 4.8530, 5.3946, 5.0878], device='cuda:3'), covar=tensor([0.0579, 0.0432, 0.1721, 0.0949, 0.0620, 0.0520, 0.0506, 0.1034], device='cuda:3'), in_proj_covar=tensor([0.0438, 0.0497, 0.0653, 0.0389, 0.0385, 0.0453, 0.0482, 0.0619], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 06:40:12,983 INFO [train.py:898] (3/4) Epoch 15, batch 800, loss[loss=0.1716, simple_loss=0.2697, pruned_loss=0.03677, over 18302.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2618, pruned_loss=0.04379, over 3528791.05 frames. 
], batch size: 54, lr: 7.64e-03, grad_scale: 8.0 2023-03-09 06:40:26,506 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9595, 4.9186, 5.0648, 4.7935, 4.7272, 4.8061, 5.1429, 5.1879], device='cuda:3'), covar=tensor([0.0065, 0.0070, 0.0061, 0.0106, 0.0068, 0.0136, 0.0067, 0.0085], device='cuda:3'), in_proj_covar=tensor([0.0086, 0.0062, 0.0065, 0.0083, 0.0068, 0.0093, 0.0079, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-09 06:40:45,737 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3732, 5.3526, 4.9063, 5.2519, 5.2809, 4.6682, 5.1870, 4.9137], device='cuda:3'), covar=tensor([0.0392, 0.0422, 0.1285, 0.0784, 0.0536, 0.0399, 0.0373, 0.1045], device='cuda:3'), in_proj_covar=tensor([0.0441, 0.0501, 0.0657, 0.0391, 0.0387, 0.0456, 0.0483, 0.0623], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 06:41:01,520 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51718.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:41:11,136 INFO [train.py:898] (3/4) Epoch 15, batch 850, loss[loss=0.2264, simple_loss=0.3048, pruned_loss=0.07399, over 12131.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2625, pruned_loss=0.04434, over 3532927.97 frames. ], batch size: 129, lr: 7.64e-03, grad_scale: 8.0 2023-03-09 06:41:24,469 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51738.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:41:35,610 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51747.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:41:36,621 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51748.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:41:47,043 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.547e+02 2.965e+02 3.491e+02 4.119e+02 9.035e+02, threshold=6.982e+02, percent-clipped=4.0 2023-03-09 06:41:57,167 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=51766.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:42:09,439 INFO [train.py:898] (3/4) Epoch 15, batch 900, loss[loss=0.172, simple_loss=0.2536, pruned_loss=0.0452, over 18262.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2626, pruned_loss=0.04443, over 3548436.17 frames. ], batch size: 47, lr: 7.63e-03, grad_scale: 8.0 2023-03-09 06:42:47,104 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51808.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:43:08,028 INFO [train.py:898] (3/4) Epoch 15, batch 950, loss[loss=0.1777, simple_loss=0.2612, pruned_loss=0.0471, over 18500.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2622, pruned_loss=0.04421, over 3557954.62 frames. 
], batch size: 47, lr: 7.63e-03, grad_scale: 8.0 2023-03-09 06:43:09,475 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51828.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 06:43:37,263 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51851.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:43:43,806 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.174e+02 2.846e+02 3.316e+02 4.013e+02 9.556e+02, threshold=6.632e+02, percent-clipped=3.0 2023-03-09 06:44:06,182 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=51876.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 06:44:06,907 INFO [train.py:898] (3/4) Epoch 15, batch 1000, loss[loss=0.2186, simple_loss=0.3082, pruned_loss=0.06449, over 18079.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2622, pruned_loss=0.04409, over 3572040.53 frames. ], batch size: 62, lr: 7.63e-03, grad_scale: 8.0 2023-03-09 06:44:41,403 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-09 06:44:46,396 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9395, 4.1819, 2.5444, 4.0904, 5.0926, 2.4685, 3.8187, 3.8997], device='cuda:3'), covar=tensor([0.0146, 0.1059, 0.1647, 0.0595, 0.0079, 0.1448, 0.0689, 0.0717], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0246, 0.0192, 0.0187, 0.0100, 0.0177, 0.0205, 0.0211], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:44:51,344 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4203, 2.4258, 3.9002, 3.5293, 2.5007, 4.1427, 3.6861, 2.7327], device='cuda:3'), covar=tensor([0.0501, 0.1576, 0.0298, 0.0342, 0.1456, 0.0199, 0.0522, 0.0901], device='cuda:3'), in_proj_covar=tensor([0.0200, 0.0229, 0.0179, 0.0150, 0.0217, 0.0194, 0.0224, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 06:45:05,512 INFO [train.py:898] (3/4) Epoch 15, batch 1050, loss[loss=0.1761, simple_loss=0.2652, pruned_loss=0.04348, over 18306.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.262, pruned_loss=0.0438, over 3587348.48 frames. ], batch size: 57, lr: 7.62e-03, grad_scale: 8.0 2023-03-09 06:45:21,098 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4747, 3.3635, 3.2929, 2.9422, 3.2619, 2.6243, 2.5689, 3.5042], device='cuda:3'), covar=tensor([0.0050, 0.0086, 0.0068, 0.0122, 0.0079, 0.0184, 0.0180, 0.0046], device='cuda:3'), in_proj_covar=tensor([0.0118, 0.0140, 0.0120, 0.0174, 0.0125, 0.0168, 0.0173, 0.0101], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 06:45:39,862 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.929e+02 2.891e+02 3.197e+02 3.857e+02 9.406e+02, threshold=6.393e+02, percent-clipped=2.0 2023-03-09 06:46:03,751 INFO [train.py:898] (3/4) Epoch 15, batch 1100, loss[loss=0.1449, simple_loss=0.2253, pruned_loss=0.03223, over 17725.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2623, pruned_loss=0.04402, over 3576998.63 frames. 
], batch size: 39, lr: 7.62e-03, grad_scale: 8.0 2023-03-09 06:46:09,775 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1640, 5.5090, 2.8423, 5.3499, 5.2781, 5.5587, 5.2726, 2.8733], device='cuda:3'), covar=tensor([0.0176, 0.0050, 0.0739, 0.0069, 0.0058, 0.0054, 0.0090, 0.0894], device='cuda:3'), in_proj_covar=tensor([0.0081, 0.0072, 0.0091, 0.0087, 0.0081, 0.0069, 0.0080, 0.0093], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 06:46:53,228 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4035, 5.3623, 4.9731, 5.3122, 5.2838, 4.6831, 5.2365, 4.9462], device='cuda:3'), covar=tensor([0.0390, 0.0387, 0.1267, 0.0646, 0.0545, 0.0414, 0.0373, 0.0914], device='cuda:3'), in_proj_covar=tensor([0.0440, 0.0501, 0.0656, 0.0392, 0.0385, 0.0457, 0.0481, 0.0619], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 06:47:06,363 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.22 vs. limit=5.0 2023-03-09 06:47:06,919 INFO [train.py:898] (3/4) Epoch 15, batch 1150, loss[loss=0.1481, simple_loss=0.2329, pruned_loss=0.03166, over 18410.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2623, pruned_loss=0.04373, over 3588085.84 frames. ], batch size: 48, lr: 7.62e-03, grad_scale: 8.0 2023-03-09 06:47:19,960 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52038.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:47:22,369 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52040.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:47:30,692 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2198, 5.3334, 4.4488, 5.1631, 5.2788, 4.6684, 5.0881, 4.7319], device='cuda:3'), covar=tensor([0.0790, 0.0615, 0.2525, 0.1105, 0.0703, 0.0605, 0.0732, 0.1273], device='cuda:3'), in_proj_covar=tensor([0.0444, 0.0504, 0.0661, 0.0395, 0.0387, 0.0460, 0.0485, 0.0623], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 06:47:31,720 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52048.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:47:42,028 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.912e+02 2.945e+02 3.374e+02 4.146e+02 9.750e+02, threshold=6.749e+02, percent-clipped=3.0 2023-03-09 06:48:05,575 INFO [train.py:898] (3/4) Epoch 15, batch 1200, loss[loss=0.1876, simple_loss=0.2756, pruned_loss=0.04979, over 16106.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2618, pruned_loss=0.04343, over 3588469.33 frames. 
], batch size: 94, lr: 7.61e-03, grad_scale: 8.0 2023-03-09 06:48:15,941 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=52086.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:48:20,700 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7873, 3.6950, 4.9754, 2.7323, 4.3772, 2.5814, 3.1175, 1.8099], device='cuda:3'), covar=tensor([0.1030, 0.0844, 0.0116, 0.0816, 0.0519, 0.2329, 0.2386, 0.1892], device='cuda:3'), in_proj_covar=tensor([0.0207, 0.0224, 0.0142, 0.0181, 0.0238, 0.0256, 0.0298, 0.0219], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 06:48:27,255 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=52096.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:48:33,702 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52101.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:48:36,136 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52103.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:49:03,939 INFO [train.py:898] (3/4) Epoch 15, batch 1250, loss[loss=0.2347, simple_loss=0.3124, pruned_loss=0.07846, over 12362.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2618, pruned_loss=0.04336, over 3585297.17 frames. ], batch size: 130, lr: 7.61e-03, grad_scale: 8.0 2023-03-09 06:49:23,576 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9793, 4.5712, 4.7366, 3.5669, 3.9447, 3.6027, 2.6655, 2.5505], device='cuda:3'), covar=tensor([0.0177, 0.0136, 0.0068, 0.0287, 0.0267, 0.0194, 0.0735, 0.0838], device='cuda:3'), in_proj_covar=tensor([0.0063, 0.0052, 0.0054, 0.0063, 0.0083, 0.0059, 0.0074, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 06:49:31,569 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52151.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:49:38,915 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.755e+02 2.743e+02 3.530e+02 4.492e+02 1.221e+03, threshold=7.059e+02, percent-clipped=4.0 2023-03-09 06:50:02,589 INFO [train.py:898] (3/4) Epoch 15, batch 1300, loss[loss=0.1626, simple_loss=0.2562, pruned_loss=0.03451, over 18387.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2618, pruned_loss=0.0434, over 3581341.38 frames. ], batch size: 50, lr: 7.61e-03, grad_scale: 8.0 2023-03-09 06:50:21,032 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52193.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:50:27,683 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=52199.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:51:00,952 INFO [train.py:898] (3/4) Epoch 15, batch 1350, loss[loss=0.1749, simple_loss=0.2653, pruned_loss=0.04223, over 18262.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2614, pruned_loss=0.04347, over 3590076.25 frames. 
], batch size: 57, lr: 7.60e-03, grad_scale: 8.0 2023-03-09 06:51:32,227 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52254.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:51:35,201 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.856e+02 2.890e+02 3.384e+02 4.121e+02 8.952e+02, threshold=6.768e+02, percent-clipped=2.0 2023-03-09 06:51:52,188 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0952, 4.6378, 4.8406, 3.7156, 3.9108, 3.6541, 2.8718, 2.8962], device='cuda:3'), covar=tensor([0.0154, 0.0145, 0.0048, 0.0228, 0.0264, 0.0192, 0.0608, 0.0637], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0052, 0.0053, 0.0062, 0.0082, 0.0059, 0.0072, 0.0078], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 06:51:59,005 INFO [train.py:898] (3/4) Epoch 15, batch 1400, loss[loss=0.174, simple_loss=0.2639, pruned_loss=0.04204, over 18396.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2607, pruned_loss=0.04328, over 3604108.81 frames. ], batch size: 52, lr: 7.60e-03, grad_scale: 8.0 2023-03-09 06:52:06,058 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3832, 5.1388, 5.5786, 5.5730, 5.2998, 6.1571, 5.8756, 5.4516], device='cuda:3'), covar=tensor([0.0948, 0.0679, 0.0756, 0.0783, 0.1372, 0.0701, 0.0544, 0.1620], device='cuda:3'), in_proj_covar=tensor([0.0343, 0.0267, 0.0283, 0.0285, 0.0319, 0.0398, 0.0261, 0.0387], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3') 2023-03-09 06:52:40,500 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52312.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:52:54,312 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8524, 3.6449, 4.9255, 4.2155, 2.9859, 2.7657, 4.0774, 5.0950], device='cuda:3'), covar=tensor([0.0779, 0.1509, 0.0146, 0.0402, 0.1092, 0.1273, 0.0476, 0.0196], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0258, 0.0123, 0.0172, 0.0184, 0.0182, 0.0183, 0.0170], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:52:57,735 INFO [train.py:898] (3/4) Epoch 15, batch 1450, loss[loss=0.176, simple_loss=0.2626, pruned_loss=0.04468, over 18366.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2609, pruned_loss=0.04343, over 3606242.34 frames. ], batch size: 56, lr: 7.59e-03, grad_scale: 8.0 2023-03-09 06:52:59,188 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7749, 5.2865, 4.9324, 5.0631, 4.9260, 4.8304, 5.3335, 5.2727], device='cuda:3'), covar=tensor([0.1217, 0.0698, 0.0971, 0.0652, 0.1357, 0.0686, 0.0612, 0.0665], device='cuda:3'), in_proj_covar=tensor([0.0567, 0.0476, 0.0354, 0.0504, 0.0693, 0.0502, 0.0676, 0.0505], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 06:53:31,866 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.869e+02 2.900e+02 3.474e+02 4.248e+02 1.297e+03, threshold=6.947e+02, percent-clipped=2.0 2023-03-09 06:53:39,046 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. 
limit=2.0 2023-03-09 06:53:51,243 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52373.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:53:55,302 INFO [train.py:898] (3/4) Epoch 15, batch 1500, loss[loss=0.1565, simple_loss=0.2328, pruned_loss=0.04006, over 18157.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2608, pruned_loss=0.04352, over 3607574.21 frames. ], batch size: 44, lr: 7.59e-03, grad_scale: 8.0 2023-03-09 06:54:11,654 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9791, 4.5735, 4.6970, 3.5600, 3.7927, 3.7169, 2.7994, 2.4570], device='cuda:3'), covar=tensor([0.0169, 0.0132, 0.0064, 0.0294, 0.0291, 0.0178, 0.0679, 0.0865], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0051, 0.0053, 0.0063, 0.0083, 0.0059, 0.0073, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 06:54:18,301 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52396.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:54:23,120 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9070, 4.4925, 4.6037, 3.4450, 3.7064, 3.5465, 2.6630, 2.4947], device='cuda:3'), covar=tensor([0.0192, 0.0149, 0.0071, 0.0298, 0.0321, 0.0223, 0.0727, 0.0820], device='cuda:3'), in_proj_covar=tensor([0.0062, 0.0052, 0.0053, 0.0063, 0.0083, 0.0059, 0.0073, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 06:54:26,326 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52403.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:54:54,576 INFO [train.py:898] (3/4) Epoch 15, batch 1550, loss[loss=0.1614, simple_loss=0.2495, pruned_loss=0.03666, over 18418.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2613, pruned_loss=0.04369, over 3586782.10 frames. ], batch size: 48, lr: 7.59e-03, grad_scale: 8.0 2023-03-09 06:55:23,294 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=52451.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:55:29,980 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.173e+02 2.872e+02 3.351e+02 3.820e+02 1.204e+03, threshold=6.703e+02, percent-clipped=1.0 2023-03-09 06:55:31,628 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6857, 2.8073, 4.4060, 3.8024, 2.5793, 4.5771, 3.8788, 2.7045], device='cuda:3'), covar=tensor([0.0492, 0.1511, 0.0216, 0.0359, 0.1707, 0.0194, 0.0503, 0.1120], device='cuda:3'), in_proj_covar=tensor([0.0205, 0.0235, 0.0182, 0.0152, 0.0221, 0.0198, 0.0228, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 06:55:54,080 INFO [train.py:898] (3/4) Epoch 15, batch 1600, loss[loss=0.1625, simple_loss=0.2428, pruned_loss=0.04107, over 18499.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2606, pruned_loss=0.04371, over 3584101.38 frames. ], batch size: 47, lr: 7.58e-03, grad_scale: 16.0 2023-03-09 06:56:52,740 INFO [train.py:898] (3/4) Epoch 15, batch 1650, loss[loss=0.1817, simple_loss=0.2725, pruned_loss=0.04548, over 17956.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2615, pruned_loss=0.04408, over 3576867.38 frames. 
], batch size: 65, lr: 7.58e-03, grad_scale: 16.0 2023-03-09 06:57:16,407 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0874, 4.2828, 2.5852, 4.2525, 5.2588, 2.7748, 3.8225, 4.0250], device='cuda:3'), covar=tensor([0.0125, 0.1075, 0.1530, 0.0526, 0.0062, 0.1121, 0.0675, 0.0704], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0254, 0.0198, 0.0192, 0.0103, 0.0180, 0.0211, 0.0217], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 06:57:18,401 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52549.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:57:25,840 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52555.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 06:57:27,651 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.792e+02 2.929e+02 3.401e+02 4.256e+02 6.511e+02, threshold=6.802e+02, percent-clipped=0.0 2023-03-09 06:57:36,027 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4484, 5.9320, 5.4816, 5.7253, 5.4630, 5.4107, 5.9946, 5.8963], device='cuda:3'), covar=tensor([0.1167, 0.0765, 0.0461, 0.0724, 0.1558, 0.0670, 0.0600, 0.0744], device='cuda:3'), in_proj_covar=tensor([0.0570, 0.0476, 0.0352, 0.0508, 0.0691, 0.0503, 0.0678, 0.0504], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 06:57:50,951 INFO [train.py:898] (3/4) Epoch 15, batch 1700, loss[loss=0.2393, simple_loss=0.3123, pruned_loss=0.08319, over 12263.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2614, pruned_loss=0.0439, over 3573669.15 frames. ], batch size: 129, lr: 7.58e-03, grad_scale: 16.0 2023-03-09 06:58:22,300 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2633, 5.1372, 5.4329, 5.4520, 5.1515, 5.9776, 5.6330, 5.2653], device='cuda:3'), covar=tensor([0.1151, 0.0692, 0.0777, 0.0685, 0.1455, 0.0822, 0.0664, 0.1803], device='cuda:3'), in_proj_covar=tensor([0.0332, 0.0261, 0.0276, 0.0277, 0.0312, 0.0389, 0.0254, 0.0380], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3') 2023-03-09 06:58:37,282 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52616.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 06:58:49,955 INFO [train.py:898] (3/4) Epoch 15, batch 1750, loss[loss=0.183, simple_loss=0.2723, pruned_loss=0.04688, over 18039.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2618, pruned_loss=0.04393, over 3578354.94 frames. ], batch size: 65, lr: 7.57e-03, grad_scale: 8.0 2023-03-09 06:59:26,520 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.883e+02 2.725e+02 3.268e+02 4.051e+02 8.662e+02, threshold=6.535e+02, percent-clipped=2.0 2023-03-09 06:59:37,918 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52668.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 06:59:48,455 INFO [train.py:898] (3/4) Epoch 15, batch 1800, loss[loss=0.17, simple_loss=0.2471, pruned_loss=0.04641, over 18292.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2621, pruned_loss=0.04387, over 3572294.00 frames. 
], batch size: 49, lr: 7.57e-03, grad_scale: 8.0 2023-03-09 07:00:08,032 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8685, 4.5740, 4.6945, 3.4885, 3.8665, 3.6651, 2.6262, 2.4938], device='cuda:3'), covar=tensor([0.0219, 0.0155, 0.0071, 0.0280, 0.0318, 0.0202, 0.0738, 0.0787], device='cuda:3'), in_proj_covar=tensor([0.0063, 0.0052, 0.0054, 0.0063, 0.0083, 0.0060, 0.0073, 0.0079], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 07:00:11,360 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52696.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:00:35,844 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2650, 2.6217, 2.4062, 2.6425, 3.4113, 3.2734, 2.9308, 2.7310], device='cuda:3'), covar=tensor([0.0186, 0.0268, 0.0574, 0.0382, 0.0160, 0.0152, 0.0334, 0.0352], device='cuda:3'), in_proj_covar=tensor([0.0126, 0.0123, 0.0158, 0.0149, 0.0113, 0.0102, 0.0140, 0.0145], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:00:46,762 INFO [train.py:898] (3/4) Epoch 15, batch 1850, loss[loss=0.1781, simple_loss=0.2715, pruned_loss=0.04238, over 18360.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2621, pruned_loss=0.04375, over 3576808.01 frames. ], batch size: 56, lr: 7.57e-03, grad_scale: 8.0 2023-03-09 07:01:08,151 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=52744.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:01:17,478 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52752.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:01:23,880 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.980e+02 2.718e+02 3.382e+02 4.103e+02 9.791e+02, threshold=6.764e+02, percent-clipped=3.0 2023-03-09 07:01:28,263 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5828, 5.2682, 5.7277, 5.6633, 5.3826, 6.2261, 5.8703, 5.5017], device='cuda:3'), covar=tensor([0.1028, 0.0624, 0.0626, 0.0662, 0.1523, 0.0691, 0.0674, 0.1661], device='cuda:3'), in_proj_covar=tensor([0.0333, 0.0262, 0.0275, 0.0278, 0.0311, 0.0387, 0.0254, 0.0380], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3') 2023-03-09 07:01:46,237 INFO [train.py:898] (3/4) Epoch 15, batch 1900, loss[loss=0.1948, simple_loss=0.2844, pruned_loss=0.05266, over 18272.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2617, pruned_loss=0.04365, over 3564358.98 frames. ], batch size: 57, lr: 7.56e-03, grad_scale: 8.0 2023-03-09 07:01:56,199 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4819, 5.4338, 5.0333, 5.3315, 5.3276, 4.8118, 5.3041, 5.0593], device='cuda:3'), covar=tensor([0.0387, 0.0388, 0.1393, 0.0884, 0.0564, 0.0394, 0.0389, 0.0992], device='cuda:3'), in_proj_covar=tensor([0.0451, 0.0506, 0.0664, 0.0400, 0.0392, 0.0460, 0.0493, 0.0631], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 07:02:30,036 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52813.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:02:45,702 INFO [train.py:898] (3/4) Epoch 15, batch 1950, loss[loss=0.198, simple_loss=0.2836, pruned_loss=0.0562, over 18567.00 frames. 
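Each [optim.py:369] line reports five order statistics (min, first quartile, median, third quartile, max) of recent gradient norms, and on every such line the printed threshold is the Clipping_scale times the printed median, up to rounding (2.0 * 3.382e+02 = 6.764e+02 just above). A hedged reconstruction; the size of the norm window kept by optim.py is an assumption:

    import torch

    def clipping_stats(recent_norms: torch.Tensor, clipping_scale: float = 2.0):
        # recent_norms: gradient norms from the last few hundred optimizer
        # steps (window size assumed).  Tying the clip threshold to the median
        # of recent norms adapts it to the run instead of using a fixed cap.
        q = torch.quantile(recent_norms,
                           torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = clipping_scale * q[2]            # 2.0 * median, as printed
        percent_clipped = 100.0 * (recent_norms > threshold).float().mean()
        return q, threshold, percent_clipped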
], tot_loss[loss=0.174, simple_loss=0.2611, pruned_loss=0.04342, over 3573503.23 frames. ], batch size: 54, lr: 7.56e-03, grad_scale: 8.0 2023-03-09 07:03:12,998 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52849.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:03:22,533 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.851e+02 2.886e+02 3.501e+02 4.238e+02 6.243e+02, threshold=7.003e+02, percent-clipped=0.0 2023-03-09 07:03:44,568 INFO [train.py:898] (3/4) Epoch 15, batch 2000, loss[loss=0.1925, simple_loss=0.2755, pruned_loss=0.05476, over 17115.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2625, pruned_loss=0.04407, over 3576442.52 frames. ], batch size: 78, lr: 7.56e-03, grad_scale: 8.0 2023-03-09 07:04:08,642 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=52897.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:04:25,200 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52911.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 07:04:43,165 INFO [train.py:898] (3/4) Epoch 15, batch 2050, loss[loss=0.1851, simple_loss=0.2766, pruned_loss=0.04676, over 18208.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2637, pruned_loss=0.0445, over 3562531.52 frames. ], batch size: 60, lr: 7.55e-03, grad_scale: 8.0 2023-03-09 07:04:45,841 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4903, 3.8640, 2.6401, 3.6772, 4.5825, 2.5385, 3.5629, 3.5605], device='cuda:3'), covar=tensor([0.0132, 0.0938, 0.1385, 0.0551, 0.0075, 0.1187, 0.0654, 0.0726], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0253, 0.0198, 0.0190, 0.0102, 0.0179, 0.0210, 0.0217], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:05:19,675 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.048e+02 3.109e+02 3.661e+02 4.518e+02 9.006e+02, threshold=7.322e+02, percent-clipped=1.0 2023-03-09 07:05:30,932 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52968.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:05:41,828 INFO [train.py:898] (3/4) Epoch 15, batch 2100, loss[loss=0.1621, simple_loss=0.2435, pruned_loss=0.04037, over 18247.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2619, pruned_loss=0.04381, over 3577427.66 frames. ], batch size: 45, lr: 7.55e-03, grad_scale: 8.0 2023-03-09 07:05:42,287 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8162, 3.6460, 5.0996, 4.3518, 3.2092, 3.0256, 4.4022, 5.1887], device='cuda:3'), covar=tensor([0.0830, 0.1585, 0.0152, 0.0380, 0.0930, 0.1113, 0.0349, 0.0219], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0257, 0.0125, 0.0171, 0.0182, 0.0182, 0.0182, 0.0170], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:05:43,613 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. 
limit=2.0 2023-03-09 07:06:16,163 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0673, 5.1927, 2.6300, 5.1238, 4.9587, 5.1991, 4.9287, 2.2944], device='cuda:3'), covar=tensor([0.0189, 0.0100, 0.0880, 0.0093, 0.0081, 0.0122, 0.0142, 0.1253], device='cuda:3'), in_proj_covar=tensor([0.0082, 0.0075, 0.0091, 0.0089, 0.0081, 0.0070, 0.0081, 0.0094], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 07:06:27,878 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-09 07:06:28,690 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=53016.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:06:41,337 INFO [train.py:898] (3/4) Epoch 15, batch 2150, loss[loss=0.1741, simple_loss=0.2698, pruned_loss=0.03925, over 18485.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2616, pruned_loss=0.04359, over 3580665.73 frames. ], batch size: 53, lr: 7.54e-03, grad_scale: 8.0 2023-03-09 07:07:15,755 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53056.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:07:17,594 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.753e+02 2.734e+02 3.607e+02 4.426e+02 8.494e+02, threshold=7.214e+02, percent-clipped=2.0 2023-03-09 07:07:40,262 INFO [train.py:898] (3/4) Epoch 15, batch 2200, loss[loss=0.1803, simple_loss=0.2716, pruned_loss=0.04453, over 18110.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2616, pruned_loss=0.04351, over 3589514.84 frames. ], batch size: 62, lr: 7.54e-03, grad_scale: 8.0 2023-03-09 07:08:16,867 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53108.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:08:27,990 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53117.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:08:33,770 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7395, 3.6250, 3.5753, 3.1472, 3.5389, 2.8492, 2.7787, 3.6861], device='cuda:3'), covar=tensor([0.0041, 0.0076, 0.0062, 0.0110, 0.0072, 0.0147, 0.0159, 0.0056], device='cuda:3'), in_proj_covar=tensor([0.0117, 0.0138, 0.0118, 0.0170, 0.0123, 0.0166, 0.0168, 0.0100], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 07:08:39,040 INFO [train.py:898] (3/4) Epoch 15, batch 2250, loss[loss=0.1773, simple_loss=0.2707, pruned_loss=0.04189, over 18348.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2607, pruned_loss=0.04319, over 3596216.37 frames. ], batch size: 56, lr: 7.54e-03, grad_scale: 8.0 2023-03-09 07:08:57,336 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8382, 3.7596, 3.6928, 3.2837, 3.6264, 2.8436, 2.8071, 3.9171], device='cuda:3'), covar=tensor([0.0049, 0.0093, 0.0065, 0.0118, 0.0076, 0.0183, 0.0189, 0.0038], device='cuda:3'), in_proj_covar=tensor([0.0118, 0.0138, 0.0119, 0.0170, 0.0124, 0.0167, 0.0169, 0.0100], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 07:09:15,601 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.829e+02 2.883e+02 3.334e+02 3.992e+02 8.132e+02, threshold=6.668e+02, percent-clipped=1.0 2023-03-09 07:09:37,839 INFO [train.py:898] (3/4) Epoch 15, batch 2300, loss[loss=0.1788, simple_loss=0.2628, pruned_loss=0.04737, over 18279.00 frames. 
], tot_loss[loss=0.1742, simple_loss=0.2614, pruned_loss=0.04353, over 3590878.87 frames. ], batch size: 60, lr: 7.53e-03, grad_scale: 8.0 2023-03-09 07:09:53,775 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5615, 2.7914, 2.6401, 2.8959, 3.6249, 3.5480, 3.0755, 2.9088], device='cuda:3'), covar=tensor([0.0176, 0.0279, 0.0496, 0.0309, 0.0198, 0.0146, 0.0311, 0.0367], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0124, 0.0159, 0.0150, 0.0116, 0.0103, 0.0142, 0.0146], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:10:18,754 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53211.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 07:10:37,436 INFO [train.py:898] (3/4) Epoch 15, batch 2350, loss[loss=0.1606, simple_loss=0.24, pruned_loss=0.04066, over 18451.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2614, pruned_loss=0.04338, over 3584019.51 frames. ], batch size: 43, lr: 7.53e-03, grad_scale: 8.0 2023-03-09 07:10:54,388 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53241.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:11:14,249 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.169e+02 3.108e+02 3.704e+02 4.390e+02 1.359e+03, threshold=7.409e+02, percent-clipped=1.0 2023-03-09 07:11:15,588 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=53259.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 07:11:20,258 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2302, 5.5587, 2.9253, 5.3606, 5.2713, 5.5751, 5.2948, 2.8652], device='cuda:3'), covar=tensor([0.0163, 0.0052, 0.0749, 0.0069, 0.0065, 0.0065, 0.0081, 0.1020], device='cuda:3'), in_proj_covar=tensor([0.0084, 0.0075, 0.0092, 0.0088, 0.0081, 0.0071, 0.0081, 0.0096], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 07:11:36,236 INFO [train.py:898] (3/4) Epoch 15, batch 2400, loss[loss=0.1525, simple_loss=0.2331, pruned_loss=0.03595, over 18496.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.262, pruned_loss=0.04351, over 3581178.42 frames. ], batch size: 44, lr: 7.53e-03, grad_scale: 8.0 2023-03-09 07:12:04,966 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53302.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:12:07,799 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53304.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:12:20,540 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53315.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:12:34,310 INFO [train.py:898] (3/4) Epoch 15, batch 2450, loss[loss=0.1822, simple_loss=0.2727, pruned_loss=0.04582, over 18316.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2625, pruned_loss=0.04389, over 3581608.02 frames. 
], batch size: 54, lr: 7.52e-03, grad_scale: 8.0 2023-03-09 07:12:53,767 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9262, 4.5516, 4.5900, 3.4068, 3.7440, 3.6158, 2.6577, 2.3872], device='cuda:3'), covar=tensor([0.0191, 0.0122, 0.0079, 0.0264, 0.0338, 0.0184, 0.0696, 0.0833], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0053, 0.0054, 0.0063, 0.0085, 0.0061, 0.0075, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 07:13:10,220 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.975e+02 3.035e+02 3.526e+02 4.195e+02 8.583e+02, threshold=7.052e+02, percent-clipped=2.0 2023-03-09 07:13:19,474 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53365.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:13:21,830 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53367.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:13:32,563 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53376.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:13:33,324 INFO [train.py:898] (3/4) Epoch 15, batch 2500, loss[loss=0.1655, simple_loss=0.2498, pruned_loss=0.04058, over 18387.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2623, pruned_loss=0.04405, over 3573108.89 frames. ], batch size: 50, lr: 7.52e-03, grad_scale: 8.0 2023-03-09 07:14:05,972 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0231, 5.0588, 5.1468, 4.8810, 4.7720, 4.8357, 5.2110, 5.2261], device='cuda:3'), covar=tensor([0.0062, 0.0049, 0.0049, 0.0079, 0.0060, 0.0122, 0.0058, 0.0070], device='cuda:3'), in_proj_covar=tensor([0.0087, 0.0063, 0.0066, 0.0085, 0.0069, 0.0095, 0.0081, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-09 07:14:09,325 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53408.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:14:14,291 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53412.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:14:25,265 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8727, 5.3065, 5.3438, 5.2944, 4.8552, 5.1813, 4.6334, 5.1569], device='cuda:3'), covar=tensor([0.0201, 0.0249, 0.0177, 0.0370, 0.0321, 0.0229, 0.1027, 0.0303], device='cuda:3'), in_proj_covar=tensor([0.0194, 0.0242, 0.0234, 0.0287, 0.0248, 0.0244, 0.0296, 0.0238], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 07:14:32,212 INFO [train.py:898] (3/4) Epoch 15, batch 2550, loss[loss=0.1936, simple_loss=0.284, pruned_loss=0.05163, over 17122.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2619, pruned_loss=0.04382, over 3581743.03 frames. 
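The [zipformer.py:1455] dumps are attention diagnostics: one entropy value per head, alongside what look like accompanying covariance summaries of the input/output projections. Near-zero entropy flags a head that has collapsed onto single frames; values in the 2.4-4.6 range just above indicate heads that are still spread out. A sketch of the entropy part, with the tensor layout an assumption:

    import torch

    def attention_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: (num_heads, tgt_len, src_len), each row a normalized
        # attention distribution over source positions.
        eps = 1.0e-20
        ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return ent.mean(dim=-1)    # average over target positions -> per head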
], batch size: 78, lr: 7.52e-03, grad_scale: 8.0 2023-03-09 07:14:33,646 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53428.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 07:14:49,269 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9111, 4.7446, 4.8877, 4.6805, 4.5202, 4.8629, 5.0221, 4.9179], device='cuda:3'), covar=tensor([0.0103, 0.0126, 0.0112, 0.0142, 0.0124, 0.0187, 0.0128, 0.0180], device='cuda:3'), in_proj_covar=tensor([0.0087, 0.0063, 0.0066, 0.0085, 0.0069, 0.0095, 0.0081, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-09 07:15:05,469 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=53456.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:15:07,569 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.050e+02 2.986e+02 3.579e+02 4.675e+02 1.603e+03, threshold=7.159e+02, percent-clipped=6.0 2023-03-09 07:15:30,223 INFO [train.py:898] (3/4) Epoch 15, batch 2600, loss[loss=0.1612, simple_loss=0.2514, pruned_loss=0.03551, over 18499.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2616, pruned_loss=0.04366, over 3581894.79 frames. ], batch size: 51, lr: 7.51e-03, grad_scale: 8.0 2023-03-09 07:15:37,945 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3057, 5.1514, 5.5325, 5.4968, 5.1958, 5.9975, 5.6663, 5.2309], device='cuda:3'), covar=tensor([0.1039, 0.0652, 0.0801, 0.0748, 0.1396, 0.0712, 0.0672, 0.1651], device='cuda:3'), in_proj_covar=tensor([0.0336, 0.0263, 0.0282, 0.0280, 0.0315, 0.0392, 0.0256, 0.0384], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3') 2023-03-09 07:15:59,312 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1397, 3.8454, 5.1668, 2.8032, 4.5299, 2.6996, 3.0899, 1.9011], device='cuda:3'), covar=tensor([0.0932, 0.0845, 0.0123, 0.0949, 0.0515, 0.2396, 0.2779, 0.2068], device='cuda:3'), in_proj_covar=tensor([0.0207, 0.0226, 0.0145, 0.0182, 0.0241, 0.0256, 0.0300, 0.0219], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 07:16:29,544 INFO [train.py:898] (3/4) Epoch 15, batch 2650, loss[loss=0.1666, simple_loss=0.26, pruned_loss=0.03656, over 18394.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2617, pruned_loss=0.04311, over 3596339.54 frames. ], batch size: 52, lr: 7.51e-03, grad_scale: 8.0 2023-03-09 07:17:05,010 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.23 vs. limit=5.0 2023-03-09 07:17:05,492 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.951e+02 2.884e+02 3.324e+02 4.046e+02 9.778e+02, threshold=6.647e+02, percent-clipped=2.0 2023-03-09 07:17:27,744 INFO [train.py:898] (3/4) Epoch 15, batch 2700, loss[loss=0.1875, simple_loss=0.2785, pruned_loss=0.0483, over 18567.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2614, pruned_loss=0.04304, over 3593750.38 frames. ], batch size: 54, lr: 7.51e-03, grad_scale: 8.0 2023-03-09 07:17:51,494 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53597.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:18:26,759 INFO [train.py:898] (3/4) Epoch 15, batch 2750, loss[loss=0.205, simple_loss=0.2937, pruned_loss=0.05817, over 18227.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2613, pruned_loss=0.04298, over 3596596.50 frames. 
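The [scaling.py:679] lines track a whitening constraint on intermediate activations: per group of channels (8 groups over 96- or 192-channel features with limit=2.0, or one group over all 384 channels with the looser limit=5.0), a metric measures how far the feature covariance is from a multiple of the identity, and a corrective gradient applies only when it exceeds the printed limit. A rough reconstruction in which a perfectly white covariance scores exactly 1.0; the precise formula in scaling.py is an assumption:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (num_frames, num_channels).  Compare each group covariance's
        # Frobenius mass against that of a scaled identity with the same
        # mean diagonal; equality gives 1.0, a dominant direction gives more.
        num_frames, num_channels = x.shape
        cpg = num_channels // num_groups                  # channels per group
        xg = x.reshape(num_frames, num_groups, cpg).permute(1, 2, 0)
        covar = torch.matmul(xg, xg.transpose(1, 2)) / num_frames
        mean_diag = covar.diagonal(dim1=1, dim2=2).mean()
        return (covar ** 2).mean() / (mean_diag ** 2 / cpg)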
], batch size: 60, lr: 7.50e-03, grad_scale: 8.0 2023-03-09 07:18:57,819 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.37 vs. limit=5.0 2023-03-09 07:19:03,249 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.144e+02 2.856e+02 3.450e+02 4.147e+02 9.079e+02, threshold=6.900e+02, percent-clipped=4.0 2023-03-09 07:19:05,874 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53660.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:19:17,128 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7982, 3.6142, 4.9881, 2.8509, 4.4272, 2.6219, 2.9865, 1.8334], device='cuda:3'), covar=tensor([0.1047, 0.0838, 0.0129, 0.0826, 0.0507, 0.2343, 0.2768, 0.2021], device='cuda:3'), in_proj_covar=tensor([0.0205, 0.0224, 0.0144, 0.0181, 0.0239, 0.0255, 0.0299, 0.0218], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 07:19:18,601 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53671.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:19:19,108 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.20 vs. limit=5.0 2023-03-09 07:19:23,427 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.59 vs. limit=2.0 2023-03-09 07:19:24,962 INFO [train.py:898] (3/4) Epoch 15, batch 2800, loss[loss=0.1542, simple_loss=0.2484, pruned_loss=0.03003, over 18370.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.262, pruned_loss=0.04338, over 3595445.75 frames. ], batch size: 50, lr: 7.50e-03, grad_scale: 8.0 2023-03-09 07:19:36,647 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9121, 4.7039, 4.7682, 3.5092, 3.9101, 3.5658, 2.9668, 2.4525], device='cuda:3'), covar=tensor([0.0189, 0.0152, 0.0068, 0.0284, 0.0286, 0.0198, 0.0671, 0.0882], device='cuda:3'), in_proj_covar=tensor([0.0064, 0.0053, 0.0055, 0.0063, 0.0084, 0.0061, 0.0075, 0.0080], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0006, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 07:20:06,967 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53712.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:20:08,374 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6093, 2.2220, 2.4813, 2.5231, 3.1330, 4.5636, 4.2072, 3.4155], device='cuda:3'), covar=tensor([0.1505, 0.2233, 0.2700, 0.1713, 0.1981, 0.0183, 0.0444, 0.0707], device='cuda:3'), in_proj_covar=tensor([0.0271, 0.0325, 0.0351, 0.0262, 0.0377, 0.0213, 0.0277, 0.0231], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 07:20:20,003 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53723.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 07:20:24,338 INFO [train.py:898] (3/4) Epoch 15, batch 2850, loss[loss=0.1551, simple_loss=0.2407, pruned_loss=0.03477, over 18362.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2619, pruned_loss=0.04323, over 3587839.07 frames. ], batch size: 46, lr: 7.50e-03, grad_scale: 8.0 2023-03-09 07:20:35,752 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.61 vs. limit=2.0 2023-03-09 07:20:35,794 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. 
limit=2.0 2023-03-09 07:20:47,561 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9910, 5.3499, 2.6046, 5.2157, 5.0595, 5.3759, 5.1299, 2.7412], device='cuda:3'), covar=tensor([0.0192, 0.0064, 0.0803, 0.0068, 0.0073, 0.0066, 0.0094, 0.0985], device='cuda:3'), in_proj_covar=tensor([0.0083, 0.0075, 0.0091, 0.0089, 0.0081, 0.0071, 0.0081, 0.0095], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 07:21:00,855 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.810e+02 2.667e+02 3.515e+02 4.128e+02 7.427e+02, threshold=7.030e+02, percent-clipped=1.0 2023-03-09 07:21:03,257 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=53760.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:21:04,923 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-09 07:21:22,593 INFO [train.py:898] (3/4) Epoch 15, batch 2900, loss[loss=0.1679, simple_loss=0.2442, pruned_loss=0.04578, over 17196.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2623, pruned_loss=0.04355, over 3590169.77 frames. ], batch size: 38, lr: 7.49e-03, grad_scale: 8.0 2023-03-09 07:21:24,842 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7566, 2.2470, 2.6948, 2.7466, 3.2744, 5.0432, 4.7048, 3.7238], device='cuda:3'), covar=tensor([0.1521, 0.2282, 0.2580, 0.1638, 0.2101, 0.0152, 0.0381, 0.0693], device='cuda:3'), in_proj_covar=tensor([0.0270, 0.0324, 0.0349, 0.0261, 0.0376, 0.0213, 0.0276, 0.0230], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 07:22:21,621 INFO [train.py:898] (3/4) Epoch 15, batch 2950, loss[loss=0.1804, simple_loss=0.2686, pruned_loss=0.04609, over 17274.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2613, pruned_loss=0.04297, over 3596231.91 frames. ], batch size: 78, lr: 7.49e-03, grad_scale: 8.0 2023-03-09 07:22:58,162 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.012e+02 2.809e+02 3.403e+02 4.048e+02 1.283e+03, threshold=6.805e+02, percent-clipped=2.0 2023-03-09 07:23:02,056 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2316, 4.6427, 4.6029, 4.6522, 4.2577, 4.5720, 4.0317, 4.5606], device='cuda:3'), covar=tensor([0.0267, 0.0304, 0.0259, 0.0420, 0.0364, 0.0254, 0.1093, 0.0308], device='cuda:3'), in_proj_covar=tensor([0.0196, 0.0243, 0.0234, 0.0290, 0.0248, 0.0245, 0.0297, 0.0239], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 07:23:20,581 INFO [train.py:898] (3/4) Epoch 15, batch 3000, loss[loss=0.181, simple_loss=0.2708, pruned_loss=0.04555, over 18397.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2619, pruned_loss=0.0433, over 3589187.17 frames. ], batch size: 52, lr: 7.49e-03, grad_scale: 8.0 2023-03-09 07:23:20,581 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 07:23:32,491 INFO [train.py:932] (3/4) Epoch 15, validation: loss=0.1532, simple_loss=0.254, pruned_loss=0.02619, over 944034.00 frames. 
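Validation passes (here at epoch 15, batch 3000, and again at the start of epoch 16 further down) run over a fixed dev set, which is why the "over 944034.00 frames" figure never changes and the validation losses are directly comparable across the run. A hedged outline of what happens between the "Computing validation loss" and "validation: ..." lines; compute_loss is a hypothetical stand-in for the recipe's loss function:

    import torch

    def run_validation(model, dev_loader, device):
        # No-grad pass over the fixed dev set; train.py:933 then logs the
        # allocator high-water mark (19934MB in the line that follows) via
        # torch.cuda.max_memory_allocated(device), which is never reset and
        # so only moves when an unusually long batch sets a new peak.
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                loss, num_frames = compute_loss(model, batch, device)  # hypothetical
                tot_loss += float(loss)
                tot_frames += num_frames
        model.train()
        return tot_loss / tot_frames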
2023-03-09 07:23:32,492 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 07:23:56,157 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53897.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:24:21,675 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8291, 3.6259, 5.0498, 4.3526, 3.2208, 3.0821, 4.4851, 5.3144], device='cuda:3'), covar=tensor([0.0742, 0.1625, 0.0171, 0.0389, 0.0911, 0.1125, 0.0356, 0.0190], device='cuda:3'), in_proj_covar=tensor([0.0142, 0.0263, 0.0128, 0.0174, 0.0185, 0.0185, 0.0186, 0.0173], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:24:30,757 INFO [train.py:898] (3/4) Epoch 15, batch 3050, loss[loss=0.2046, simple_loss=0.2878, pruned_loss=0.06065, over 18353.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2621, pruned_loss=0.04351, over 3571722.41 frames. ], batch size: 56, lr: 7.48e-03, grad_scale: 4.0 2023-03-09 07:24:52,740 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=53945.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:25:08,738 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.036e+02 2.885e+02 3.297e+02 3.882e+02 9.425e+02, threshold=6.594e+02, percent-clipped=2.0 2023-03-09 07:25:10,277 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53960.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:25:19,372 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9159, 4.5587, 4.6765, 3.5624, 3.8783, 3.5650, 2.7237, 2.3159], device='cuda:3'), covar=tensor([0.0196, 0.0172, 0.0059, 0.0248, 0.0297, 0.0186, 0.0710, 0.0894], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0054, 0.0055, 0.0064, 0.0085, 0.0062, 0.0076, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 07:25:22,568 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53971.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:25:29,403 INFO [train.py:898] (3/4) Epoch 15, batch 3100, loss[loss=0.1765, simple_loss=0.2595, pruned_loss=0.04676, over 18289.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.262, pruned_loss=0.04368, over 3576418.00 frames. ], batch size: 49, lr: 7.48e-03, grad_scale: 4.0 2023-03-09 07:25:30,081 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. limit=2.0 2023-03-09 07:25:31,874 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4150, 5.3827, 5.0107, 5.3116, 5.2409, 4.6953, 5.2112, 4.9610], device='cuda:3'), covar=tensor([0.0401, 0.0424, 0.1338, 0.0819, 0.0577, 0.0401, 0.0405, 0.1137], device='cuda:3'), in_proj_covar=tensor([0.0446, 0.0516, 0.0668, 0.0399, 0.0394, 0.0465, 0.0496, 0.0634], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 07:26:10,177 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=54008.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:26:23,266 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=54019.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:26:23,663 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.47 vs. 
limit=5.0 2023-03-09 07:26:27,829 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=54023.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 07:26:29,072 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6409, 3.5321, 4.8276, 4.1606, 3.0358, 2.7935, 4.2913, 5.0774], device='cuda:3'), covar=tensor([0.0780, 0.1462, 0.0161, 0.0420, 0.1019, 0.1251, 0.0393, 0.0214], device='cuda:3'), in_proj_covar=tensor([0.0139, 0.0258, 0.0125, 0.0171, 0.0181, 0.0181, 0.0183, 0.0170], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:26:31,793 INFO [train.py:898] (3/4) Epoch 15, batch 3150, loss[loss=0.1626, simple_loss=0.2584, pruned_loss=0.03336, over 18402.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2623, pruned_loss=0.0438, over 3582072.26 frames. ], batch size: 52, lr: 7.48e-03, grad_scale: 4.0 2023-03-09 07:26:48,703 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6807, 3.5667, 2.1318, 4.5199, 3.1838, 4.5366, 2.7082, 4.0513], device='cuda:3'), covar=tensor([0.0597, 0.0779, 0.1466, 0.0528, 0.0847, 0.0280, 0.1048, 0.0372], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0220, 0.0186, 0.0268, 0.0188, 0.0257, 0.0198, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:27:05,975 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.50 vs. limit=2.0 2023-03-09 07:27:09,867 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.040e+02 2.859e+02 3.550e+02 4.150e+02 8.480e+02, threshold=7.099e+02, percent-clipped=1.0 2023-03-09 07:27:19,899 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5900, 2.2101, 2.5686, 2.7204, 3.2976, 4.9920, 4.7047, 3.5103], device='cuda:3'), covar=tensor([0.1581, 0.2225, 0.2730, 0.1576, 0.2065, 0.0162, 0.0342, 0.0734], device='cuda:3'), in_proj_covar=tensor([0.0268, 0.0323, 0.0349, 0.0259, 0.0373, 0.0212, 0.0275, 0.0229], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 07:27:24,062 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=54071.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:27:30,818 INFO [train.py:898] (3/4) Epoch 15, batch 3200, loss[loss=0.1669, simple_loss=0.256, pruned_loss=0.03889, over 18305.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2625, pruned_loss=0.04368, over 3586765.66 frames. ], batch size: 57, lr: 7.47e-03, grad_scale: 8.0 2023-03-09 07:28:28,994 INFO [train.py:898] (3/4) Epoch 15, batch 3250, loss[loss=0.1821, simple_loss=0.2777, pruned_loss=0.04324, over 18135.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2614, pruned_loss=0.04341, over 3582766.98 frames. ], batch size: 62, lr: 7.47e-03, grad_scale: 8.0 2023-03-09 07:29:07,308 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.872e+02 2.643e+02 3.129e+02 4.018e+02 8.489e+02, threshold=6.258e+02, percent-clipped=1.0 2023-03-09 07:29:28,035 INFO [train.py:898] (3/4) Epoch 15, batch 3300, loss[loss=0.1861, simple_loss=0.2696, pruned_loss=0.05129, over 18410.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2619, pruned_loss=0.04345, over 3586578.43 frames. ], batch size: 48, lr: 7.46e-03, grad_scale: 8.0 2023-03-09 07:30:27,872 INFO [train.py:898] (3/4) Epoch 15, batch 3350, loss[loss=0.1704, simple_loss=0.2673, pruned_loss=0.03676, over 18562.00 frames. 
], tot_loss[loss=0.174, simple_loss=0.2616, pruned_loss=0.0432, over 3587393.23 frames. ], batch size: 54, lr: 7.46e-03, grad_scale: 8.0 2023-03-09 07:31:03,982 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7717, 3.5278, 4.4987, 2.9303, 3.9595, 2.6951, 2.8534, 2.1063], device='cuda:3'), covar=tensor([0.1011, 0.0786, 0.0148, 0.0710, 0.0624, 0.2083, 0.2378, 0.1773], device='cuda:3'), in_proj_covar=tensor([0.0211, 0.0230, 0.0147, 0.0185, 0.0247, 0.0260, 0.0306, 0.0224], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 07:31:05,786 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.725e+02 2.934e+02 3.343e+02 4.171e+02 1.510e+03, threshold=6.685e+02, percent-clipped=4.0 2023-03-09 07:31:26,834 INFO [train.py:898] (3/4) Epoch 15, batch 3400, loss[loss=0.1535, simple_loss=0.2417, pruned_loss=0.03267, over 18353.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2613, pruned_loss=0.04323, over 3579164.34 frames. ], batch size: 46, lr: 7.46e-03, grad_scale: 8.0 2023-03-09 07:31:32,205 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-09 07:32:24,911 INFO [train.py:898] (3/4) Epoch 15, batch 3450, loss[loss=0.1791, simple_loss=0.2695, pruned_loss=0.0443, over 17115.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2614, pruned_loss=0.04322, over 3588239.17 frames. ], batch size: 78, lr: 7.45e-03, grad_scale: 8.0 2023-03-09 07:33:02,927 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.097e+02 3.124e+02 3.686e+02 4.996e+02 1.186e+03, threshold=7.372e+02, percent-clipped=7.0 2023-03-09 07:33:23,261 INFO [train.py:898] (3/4) Epoch 15, batch 3500, loss[loss=0.1582, simple_loss=0.239, pruned_loss=0.03876, over 18260.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2621, pruned_loss=0.04315, over 3586965.15 frames. ], batch size: 45, lr: 7.45e-03, grad_scale: 8.0 2023-03-09 07:34:01,284 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-09 07:34:20,581 INFO [train.py:898] (3/4) Epoch 15, batch 3550, loss[loss=0.2099, simple_loss=0.296, pruned_loss=0.06196, over 18507.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2621, pruned_loss=0.04331, over 3597596.36 frames. ], batch size: 53, lr: 7.45e-03, grad_scale: 8.0 2023-03-09 07:34:45,917 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.86 vs. limit=5.0 2023-03-09 07:34:54,890 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.932e+02 3.002e+02 3.469e+02 4.157e+02 6.561e+02, threshold=6.938e+02, percent-clipped=0.0 2023-03-09 07:35:13,937 INFO [train.py:898] (3/4) Epoch 15, batch 3600, loss[loss=0.187, simple_loss=0.2804, pruned_loss=0.04683, over 16131.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2619, pruned_loss=0.04348, over 3596817.41 frames. ], batch size: 94, lr: 7.44e-03, grad_scale: 8.0 2023-03-09 07:36:16,288 INFO [train.py:898] (3/4) Epoch 16, batch 0, loss[loss=0.1613, simple_loss=0.2432, pruned_loss=0.0397, over 18385.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2432, pruned_loss=0.0397, over 18385.00 frames. ], batch size: 46, lr: 7.20e-03, grad_scale: 8.0 2023-03-09 07:36:16,288 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 07:36:28,070 INFO [train.py:932] (3/4) Epoch 16, validation: loss=0.1541, simple_loss=0.2552, pruned_loss=0.02651, over 944034.00 frames. 
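Note the two kinds of learning-rate movement visible here: a slow within-epoch drift (7.59e-03 at epoch 15, batch 1500 down to 7.44e-03 by batch 3600) and a distinct step to 7.20e-03 the moment epoch 16 starts. Both are characteristic of an Eden-style schedule as used in icefall, sketched below; the lr_batches and lr_epochs constants are assumptions, not values read from this log:

    def eden_lr(base_lr: float, batch: int, epoch: float,
                lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
        # Smooth power-law decay in the batch index gives the within-epoch
        # drift; the separate epoch factor produces the visible step each
        # time the epoch counter increments.
        return (base_lr
                * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
                * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)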
2023-03-09 07:36:28,070 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 07:37:24,164 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.170e+02 3.007e+02 3.556e+02 4.370e+02 7.449e+02, threshold=7.113e+02, percent-clipped=5.0 2023-03-09 07:37:26,544 INFO [train.py:898] (3/4) Epoch 16, batch 50, loss[loss=0.1729, simple_loss=0.2677, pruned_loss=0.039, over 18006.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2635, pruned_loss=0.0436, over 809518.45 frames. ], batch size: 65, lr: 7.20e-03, grad_scale: 8.0 2023-03-09 07:37:42,945 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1236, 3.8723, 5.2471, 3.0663, 4.5870, 2.7721, 3.2620, 1.9600], device='cuda:3'), covar=tensor([0.0964, 0.0818, 0.0101, 0.0832, 0.0527, 0.2378, 0.2406, 0.1978], device='cuda:3'), in_proj_covar=tensor([0.0209, 0.0228, 0.0148, 0.0184, 0.0246, 0.0258, 0.0304, 0.0222], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 07:38:25,632 INFO [train.py:898] (3/4) Epoch 16, batch 100, loss[loss=0.1504, simple_loss=0.227, pruned_loss=0.03691, over 18410.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2624, pruned_loss=0.04261, over 1433951.34 frames. ], batch size: 43, lr: 7.20e-03, grad_scale: 8.0 2023-03-09 07:39:01,237 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9646, 4.2402, 2.5073, 4.2261, 5.1716, 2.6424, 3.7639, 3.9878], device='cuda:3'), covar=tensor([0.0138, 0.1053, 0.1518, 0.0533, 0.0060, 0.1191, 0.0672, 0.0652], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0254, 0.0199, 0.0191, 0.0105, 0.0181, 0.0210, 0.0216], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:39:21,646 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.042e+02 2.818e+02 3.254e+02 3.972e+02 9.345e+02, threshold=6.508e+02, percent-clipped=3.0 2023-03-09 07:39:23,843 INFO [train.py:898] (3/4) Epoch 16, batch 150, loss[loss=0.1561, simple_loss=0.2437, pruned_loss=0.03425, over 18361.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2613, pruned_loss=0.04263, over 1921657.55 frames. ], batch size: 46, lr: 7.19e-03, grad_scale: 8.0 2023-03-09 07:39:27,635 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=54664.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 07:39:31,899 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9700, 4.9485, 5.0241, 4.8135, 4.7889, 4.8191, 5.1956, 5.1759], device='cuda:3'), covar=tensor([0.0071, 0.0073, 0.0063, 0.0088, 0.0067, 0.0132, 0.0083, 0.0104], device='cuda:3'), in_proj_covar=tensor([0.0087, 0.0063, 0.0067, 0.0085, 0.0069, 0.0095, 0.0081, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-09 07:40:22,368 INFO [train.py:898] (3/4) Epoch 16, batch 200, loss[loss=0.1731, simple_loss=0.2653, pruned_loss=0.04047, over 18485.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2614, pruned_loss=0.04272, over 2296198.23 frames. ], batch size: 53, lr: 7.19e-03, grad_scale: 8.0 2023-03-09 07:40:38,430 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=54725.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 07:41:07,542 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.93 vs. 
limit=2.0 2023-03-09 07:41:17,922 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.955e+02 3.015e+02 3.636e+02 4.590e+02 9.600e+02, threshold=7.273e+02, percent-clipped=5.0 2023-03-09 07:41:20,201 INFO [train.py:898] (3/4) Epoch 16, batch 250, loss[loss=0.1524, simple_loss=0.2316, pruned_loss=0.03662, over 18400.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2613, pruned_loss=0.04301, over 2582972.17 frames. ], batch size: 42, lr: 7.19e-03, grad_scale: 8.0 2023-03-09 07:42:17,939 INFO [train.py:898] (3/4) Epoch 16, batch 300, loss[loss=0.1355, simple_loss=0.2158, pruned_loss=0.02762, over 18429.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2614, pruned_loss=0.04317, over 2800631.55 frames. ], batch size: 43, lr: 7.18e-03, grad_scale: 8.0 2023-03-09 07:42:33,464 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8849, 2.9855, 4.3776, 3.9253, 3.1301, 4.6147, 3.9158, 3.1199], device='cuda:3'), covar=tensor([0.0442, 0.1334, 0.0219, 0.0341, 0.1140, 0.0216, 0.0578, 0.0846], device='cuda:3'), in_proj_covar=tensor([0.0204, 0.0234, 0.0186, 0.0154, 0.0221, 0.0203, 0.0235, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 07:42:34,950 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-09 07:42:36,890 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4145, 3.7307, 2.4093, 3.7342, 4.5946, 2.5035, 3.4638, 3.6370], device='cuda:3'), covar=tensor([0.0174, 0.1141, 0.1594, 0.0619, 0.0086, 0.1230, 0.0717, 0.0705], device='cuda:3'), in_proj_covar=tensor([0.0144, 0.0253, 0.0197, 0.0190, 0.0105, 0.0178, 0.0208, 0.0214], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:43:13,467 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8106, 4.8192, 4.9466, 4.6265, 4.7543, 4.6760, 4.9699, 5.0206], device='cuda:3'), covar=tensor([0.0068, 0.0065, 0.0057, 0.0100, 0.0057, 0.0128, 0.0097, 0.0090], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0064, 0.0068, 0.0088, 0.0071, 0.0098, 0.0083, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 07:43:13,636 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4825, 2.1945, 2.5835, 2.6382, 3.0946, 4.7523, 4.4078, 3.3676], device='cuda:3'), covar=tensor([0.1665, 0.2390, 0.2771, 0.1692, 0.2274, 0.0200, 0.0448, 0.0809], device='cuda:3'), in_proj_covar=tensor([0.0271, 0.0328, 0.0355, 0.0263, 0.0377, 0.0215, 0.0279, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 07:43:14,224 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.101e+02 2.934e+02 3.524e+02 4.515e+02 1.434e+03, threshold=7.048e+02, percent-clipped=4.0 2023-03-09 07:43:16,397 INFO [train.py:898] (3/4) Epoch 16, batch 350, loss[loss=0.2102, simple_loss=0.2978, pruned_loss=0.06135, over 15947.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2619, pruned_loss=0.04343, over 2981684.06 frames. 
], batch size: 94, lr: 7.18e-03, grad_scale: 8.0 2023-03-09 07:43:17,894 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=54862.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:44:15,027 INFO [train.py:898] (3/4) Epoch 16, batch 400, loss[loss=0.1592, simple_loss=0.2455, pruned_loss=0.03645, over 18400.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2623, pruned_loss=0.04367, over 3105730.37 frames. ], batch size: 50, lr: 7.18e-03, grad_scale: 8.0 2023-03-09 07:44:28,880 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=54923.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 07:45:00,728 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8448, 4.4303, 4.6156, 3.3282, 3.7025, 3.4704, 2.6954, 2.5826], device='cuda:3'), covar=tensor([0.0212, 0.0174, 0.0076, 0.0330, 0.0378, 0.0246, 0.0745, 0.0825], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0055, 0.0056, 0.0065, 0.0087, 0.0063, 0.0077, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 07:45:00,764 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6131, 3.4153, 2.1258, 4.3557, 2.9691, 4.3146, 2.4140, 3.9621], device='cuda:3'), covar=tensor([0.0597, 0.0791, 0.1493, 0.0469, 0.0871, 0.0353, 0.1184, 0.0389], device='cuda:3'), in_proj_covar=tensor([0.0207, 0.0221, 0.0186, 0.0268, 0.0188, 0.0256, 0.0196, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:45:01,411 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-09 07:45:11,977 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.963e+02 2.717e+02 3.272e+02 3.926e+02 7.148e+02, threshold=6.544e+02, percent-clipped=1.0 2023-03-09 07:45:13,180 INFO [train.py:898] (3/4) Epoch 16, batch 450, loss[loss=0.1836, simple_loss=0.2738, pruned_loss=0.04672, over 17124.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2624, pruned_loss=0.04343, over 3210606.93 frames. ], batch size: 78, lr: 7.17e-03, grad_scale: 8.0 2023-03-09 07:46:12,514 INFO [train.py:898] (3/4) Epoch 16, batch 500, loss[loss=0.1399, simple_loss=0.2239, pruned_loss=0.02789, over 17669.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.261, pruned_loss=0.04272, over 3301108.70 frames. ], batch size: 39, lr: 7.17e-03, grad_scale: 8.0 2023-03-09 07:46:23,095 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55020.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 07:46:23,204 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3486, 5.3228, 4.8902, 5.2881, 5.2292, 4.6678, 5.1582, 4.9175], device='cuda:3'), covar=tensor([0.0449, 0.0458, 0.1551, 0.0751, 0.0602, 0.0433, 0.0463, 0.0962], device='cuda:3'), in_proj_covar=tensor([0.0442, 0.0508, 0.0667, 0.0400, 0.0396, 0.0467, 0.0498, 0.0638], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 07:46:48,283 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-09 07:47:09,466 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.142e+02 2.826e+02 3.526e+02 4.225e+02 1.034e+03, threshold=7.052e+02, percent-clipped=4.0 2023-03-09 07:47:10,627 INFO [train.py:898] (3/4) Epoch 16, batch 550, loss[loss=0.1641, simple_loss=0.2461, pruned_loss=0.04104, over 18254.00 frames. 
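The frame counts attached to tot_loss expose the averaging scheme: the accumulator resets at each epoch boundary (at epoch 16, batch 0 it equals the single-batch loss over 18385 frames), grows through the first batches (809518.45 frames by batch 50, 1433951.34 by batch 100), and then plateaus near 3.6M frames, i.e. an effective horizon of roughly 200 recent batches at ~18k frames each. A sketch consistent with those numbers; the decay constant is inferred from the plateau, not read from train.py:

    class RunningLoss:
        # Frame-weighted running average with geometric forgetting.  With
        # decay = 1 - 1/200 and ~18k frames per batch this reproduces both
        # the ~809.5k-frame figure at batch 50 and the ~3.6e6-frame plateau.
        def __init__(self, decay: float = 1.0 - 1.0 / 200.0):
            self.weighted_sum = 0.0
            self.frames = 0.0
            self.decay = decay

        def update(self, batch_loss: float, batch_frames: float):
            self.weighted_sum = (self.decay * self.weighted_sum
                                 + batch_loss * batch_frames)
            self.frames = self.decay * self.frames + batch_frames
            return self.weighted_sum / self.frames, self.frames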
], tot_loss[loss=0.1737, simple_loss=0.2619, pruned_loss=0.04272, over 3374503.27 frames. ], batch size: 45, lr: 7.17e-03, grad_scale: 8.0 2023-03-09 07:48:09,165 INFO [train.py:898] (3/4) Epoch 16, batch 600, loss[loss=0.1647, simple_loss=0.2474, pruned_loss=0.04099, over 18254.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2619, pruned_loss=0.04273, over 3427740.54 frames. ], batch size: 47, lr: 7.16e-03, grad_scale: 8.0 2023-03-09 07:48:51,679 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55148.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:49:05,763 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.722e+02 2.752e+02 3.250e+02 4.071e+02 8.362e+02, threshold=6.500e+02, percent-clipped=2.0 2023-03-09 07:49:06,165 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9853, 5.0430, 5.1305, 4.8108, 4.7764, 4.9032, 5.1922, 5.2389], device='cuda:3'), covar=tensor([0.0065, 0.0070, 0.0049, 0.0110, 0.0069, 0.0141, 0.0097, 0.0110], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0065, 0.0069, 0.0088, 0.0071, 0.0098, 0.0083, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 07:49:06,963 INFO [train.py:898] (3/4) Epoch 16, batch 650, loss[loss=0.1509, simple_loss=0.2374, pruned_loss=0.03224, over 18373.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2625, pruned_loss=0.04286, over 3462595.40 frames. ], batch size: 46, lr: 7.16e-03, grad_scale: 8.0 2023-03-09 07:50:04,091 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55209.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:50:05,814 INFO [train.py:898] (3/4) Epoch 16, batch 700, loss[loss=0.1789, simple_loss=0.2618, pruned_loss=0.04805, over 18377.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2629, pruned_loss=0.04307, over 3487796.26 frames. ], batch size: 50, lr: 7.16e-03, grad_scale: 4.0 2023-03-09 07:50:06,131 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55211.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:50:14,562 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55218.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 07:51:02,239 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5279, 4.6123, 4.6509, 4.3917, 4.4320, 4.4263, 4.7080, 4.7263], device='cuda:3'), covar=tensor([0.0080, 0.0062, 0.0061, 0.0099, 0.0069, 0.0135, 0.0073, 0.0093], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0064, 0.0068, 0.0087, 0.0070, 0.0097, 0.0082, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 07:51:02,972 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.749e+02 2.606e+02 3.222e+02 3.898e+02 7.462e+02, threshold=6.443e+02, percent-clipped=3.0 2023-03-09 07:51:02,999 INFO [train.py:898] (3/4) Epoch 16, batch 750, loss[loss=0.1768, simple_loss=0.2682, pruned_loss=0.04264, over 17119.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2623, pruned_loss=0.04274, over 3523824.36 frames. 
], batch size: 78, lr: 7.15e-03, grad_scale: 4.0 2023-03-09 07:51:17,386 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55272.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:51:21,988 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1478, 4.2520, 2.5446, 4.2812, 5.4137, 2.9943, 3.7296, 3.8269], device='cuda:3'), covar=tensor([0.0111, 0.1159, 0.1422, 0.0527, 0.0047, 0.0908, 0.0624, 0.0851], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0252, 0.0196, 0.0190, 0.0104, 0.0177, 0.0208, 0.0217], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:52:01,602 INFO [train.py:898] (3/4) Epoch 16, batch 800, loss[loss=0.1547, simple_loss=0.2465, pruned_loss=0.03146, over 18502.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2614, pruned_loss=0.04254, over 3535935.45 frames. ], batch size: 51, lr: 7.15e-03, grad_scale: 8.0 2023-03-09 07:52:13,198 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55320.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 07:52:46,445 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5480, 2.7840, 2.4818, 2.8524, 3.6480, 3.5468, 3.1041, 2.9344], device='cuda:3'), covar=tensor([0.0139, 0.0238, 0.0563, 0.0342, 0.0147, 0.0149, 0.0360, 0.0390], device='cuda:3'), in_proj_covar=tensor([0.0130, 0.0126, 0.0159, 0.0149, 0.0115, 0.0104, 0.0145, 0.0144], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 07:53:00,378 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.160e+02 2.840e+02 3.265e+02 3.800e+02 8.424e+02, threshold=6.530e+02, percent-clipped=5.0 2023-03-09 07:53:00,403 INFO [train.py:898] (3/4) Epoch 16, batch 850, loss[loss=0.2057, simple_loss=0.286, pruned_loss=0.06273, over 12960.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2614, pruned_loss=0.04261, over 3541609.92 frames. ], batch size: 129, lr: 7.15e-03, grad_scale: 8.0 2023-03-09 07:53:08,445 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=55368.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 07:53:59,254 INFO [train.py:898] (3/4) Epoch 16, batch 900, loss[loss=0.1604, simple_loss=0.2401, pruned_loss=0.04033, over 16875.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2605, pruned_loss=0.04238, over 3552234.40 frames. ], batch size: 37, lr: 7.15e-03, grad_scale: 8.0 2023-03-09 07:54:05,316 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9731, 5.3654, 2.8570, 5.1812, 5.1214, 5.3710, 5.1075, 2.6269], device='cuda:3'), covar=tensor([0.0201, 0.0070, 0.0707, 0.0076, 0.0071, 0.0083, 0.0115, 0.0993], device='cuda:3'), in_proj_covar=tensor([0.0083, 0.0075, 0.0092, 0.0087, 0.0080, 0.0071, 0.0081, 0.0094], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 07:54:36,643 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-09 07:54:57,055 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.979e+02 2.881e+02 3.338e+02 4.155e+02 1.042e+03, threshold=6.676e+02, percent-clipped=4.0 2023-03-09 07:54:57,080 INFO [train.py:898] (3/4) Epoch 16, batch 950, loss[loss=0.175, simple_loss=0.2622, pruned_loss=0.04391, over 18377.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2607, pruned_loss=0.04216, over 3556874.23 frames. 
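The grad_scale column is fp16 dynamic loss scaling: it sits on powers of two and moves in both directions earlier in this stretch (8.0 at epoch 15, batch 1500; grown to 16.0 by batch 1600; backed off to 8.0 again at batch 1750, presumably after an overflow). A minimal sketch of the standard torch.cuda.amp pattern; the real train.py step also applies the quartile-based clipping logged by optim.py above, and compute_loss is a hypothetical stand-in:

    import torch

    scaler = torch.cuda.amp.GradScaler()

    def fp16_step(model, optimizer, batch, device):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = compute_loss(model, batch, device)      # hypothetical
        scaler.scale(loss).backward()   # gradients carry the scale factor
        scaler.step(optimizer)          # skips the update on inf/nan grads
        scaler.update()                 # grows the scale after a run of clean
                                        # steps, halves it on overflow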
], batch size: 56, lr: 7.14e-03, grad_scale: 8.0 2023-03-09 07:55:43,855 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.89 vs. limit=2.0 2023-03-09 07:55:46,652 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55504.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:55:54,399 INFO [train.py:898] (3/4) Epoch 16, batch 1000, loss[loss=0.1984, simple_loss=0.2835, pruned_loss=0.05663, over 18340.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2619, pruned_loss=0.04276, over 3566445.66 frames. ], batch size: 56, lr: 7.14e-03, grad_scale: 8.0 2023-03-09 07:56:03,327 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55518.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 07:56:53,733 INFO [train.py:898] (3/4) Epoch 16, batch 1050, loss[loss=0.1404, simple_loss=0.2164, pruned_loss=0.03221, over 17708.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2615, pruned_loss=0.04266, over 3586503.80 frames. ], batch size: 39, lr: 7.14e-03, grad_scale: 4.0 2023-03-09 07:56:54,806 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.207e+02 2.979e+02 3.472e+02 4.235e+02 7.011e+02, threshold=6.944e+02, percent-clipped=2.0 2023-03-09 07:57:00,131 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=55566.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:57:01,341 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55567.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 07:57:52,418 INFO [train.py:898] (3/4) Epoch 16, batch 1100, loss[loss=0.1551, simple_loss=0.2389, pruned_loss=0.03561, over 17661.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.261, pruned_loss=0.04259, over 3586621.85 frames. ], batch size: 39, lr: 7.13e-03, grad_scale: 4.0 2023-03-09 07:58:52,116 INFO [train.py:898] (3/4) Epoch 16, batch 1150, loss[loss=0.1621, simple_loss=0.2509, pruned_loss=0.03666, over 18358.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2606, pruned_loss=0.04211, over 3598551.28 frames. ], batch size: 46, lr: 7.13e-03, grad_scale: 4.0 2023-03-09 07:58:53,245 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.682e+02 3.124e+02 3.807e+02 4.865e+02 2.142e+03, threshold=7.614e+02, percent-clipped=11.0 2023-03-09 07:59:50,921 INFO [train.py:898] (3/4) Epoch 16, batch 1200, loss[loss=0.165, simple_loss=0.2523, pruned_loss=0.03882, over 18285.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2607, pruned_loss=0.04224, over 3589444.81 frames. ], batch size: 49, lr: 7.13e-03, grad_scale: 8.0 2023-03-09 08:00:35,031 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55748.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:00:36,355 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.02 vs. limit=5.0 2023-03-09 08:00:48,846 INFO [train.py:898] (3/4) Epoch 16, batch 1250, loss[loss=0.1699, simple_loss=0.2502, pruned_loss=0.04481, over 18410.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2608, pruned_loss=0.04242, over 3595268.20 frames. 
], batch size: 48, lr: 7.12e-03, grad_scale: 8.0 2023-03-09 08:00:49,971 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.937e+02 2.771e+02 3.226e+02 3.849e+02 7.150e+02, threshold=6.452e+02, percent-clipped=0.0 2023-03-09 08:01:31,530 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3584, 5.3990, 4.7932, 5.3197, 5.3046, 4.7222, 5.2015, 4.9207], device='cuda:3'), covar=tensor([0.0554, 0.0530, 0.1799, 0.0917, 0.0766, 0.0505, 0.0540, 0.1130], device='cuda:3'), in_proj_covar=tensor([0.0454, 0.0521, 0.0674, 0.0406, 0.0403, 0.0479, 0.0508, 0.0641], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 08:01:39,847 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55804.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:01:45,638 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55809.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:01:47,518 INFO [train.py:898] (3/4) Epoch 16, batch 1300, loss[loss=0.1473, simple_loss=0.2303, pruned_loss=0.03215, over 18258.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2613, pruned_loss=0.04261, over 3606777.18 frames. ], batch size: 45, lr: 7.12e-03, grad_scale: 8.0 2023-03-09 08:02:17,632 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6266, 2.1987, 2.5555, 2.5383, 3.1858, 4.8869, 4.6013, 3.4891], device='cuda:3'), covar=tensor([0.1569, 0.2304, 0.2821, 0.1787, 0.2250, 0.0157, 0.0379, 0.0768], device='cuda:3'), in_proj_covar=tensor([0.0275, 0.0331, 0.0357, 0.0265, 0.0381, 0.0219, 0.0283, 0.0234], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 08:02:30,148 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55848.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:02:34,748 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=55852.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:02:44,961 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.53 vs. limit=2.0 2023-03-09 08:02:45,457 INFO [train.py:898] (3/4) Epoch 16, batch 1350, loss[loss=0.1665, simple_loss=0.2635, pruned_loss=0.03475, over 18546.00 frames. ], tot_loss[loss=0.173, simple_loss=0.261, pruned_loss=0.04247, over 3612638.82 frames. ], batch size: 54, lr: 7.12e-03, grad_scale: 8.0 2023-03-09 08:02:46,541 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.956e+02 2.894e+02 3.394e+02 4.145e+02 8.688e+02, threshold=6.789e+02, percent-clipped=2.0 2023-03-09 08:02:52,265 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55867.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:03:42,134 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55909.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:03:43,925 INFO [train.py:898] (3/4) Epoch 16, batch 1400, loss[loss=0.147, simple_loss=0.227, pruned_loss=0.03351, over 18510.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2609, pruned_loss=0.04259, over 3608019.58 frames. 
], batch size: 44, lr: 7.11e-03, grad_scale: 8.0 2023-03-09 08:03:48,613 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=55915.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:04:42,844 INFO [train.py:898] (3/4) Epoch 16, batch 1450, loss[loss=0.1706, simple_loss=0.2583, pruned_loss=0.04143, over 18560.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2606, pruned_loss=0.04241, over 3596080.67 frames. ], batch size: 54, lr: 7.11e-03, grad_scale: 8.0 2023-03-09 08:04:43,989 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.940e+02 2.882e+02 3.410e+02 3.952e+02 8.442e+02, threshold=6.821e+02, percent-clipped=3.0 2023-03-09 08:05:46,795 INFO [train.py:898] (3/4) Epoch 16, batch 1500, loss[loss=0.182, simple_loss=0.2782, pruned_loss=0.04286, over 17819.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2593, pruned_loss=0.0416, over 3602768.23 frames. ], batch size: 70, lr: 7.11e-03, grad_scale: 8.0 2023-03-09 08:05:48,243 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7615, 5.2746, 4.9031, 5.0847, 4.8800, 4.8168, 5.3667, 5.2620], device='cuda:3'), covar=tensor([0.1291, 0.0808, 0.0939, 0.0723, 0.1463, 0.0727, 0.0616, 0.0741], device='cuda:3'), in_proj_covar=tensor([0.0579, 0.0489, 0.0363, 0.0516, 0.0710, 0.0516, 0.0693, 0.0517], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 08:05:51,817 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6421, 3.5943, 4.9527, 4.2852, 3.1962, 3.0558, 4.3474, 5.1772], device='cuda:3'), covar=tensor([0.0839, 0.1566, 0.0167, 0.0376, 0.0921, 0.1134, 0.0411, 0.0248], device='cuda:3'), in_proj_covar=tensor([0.0143, 0.0264, 0.0130, 0.0175, 0.0185, 0.0185, 0.0186, 0.0176], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:06:12,372 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-09 08:06:43,870 INFO [train.py:898] (3/4) Epoch 16, batch 1550, loss[loss=0.2003, simple_loss=0.2867, pruned_loss=0.05698, over 17907.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2605, pruned_loss=0.04225, over 3593347.95 frames. 
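[Editor's note] The `attn_weights_entropy` dumps from zipformer.py:1455 report per-head statistics of the encoder's attention distributions. A plausible way to compute such an entropy, reduced over queries, is sketched below; the exact axes and reduction behind the logged tensors are assumptions:

```python
import torch

def attention_entropy(attn_weights: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
    """Per-head entropy of attention distributions.

    attn_weights: (num_heads, num_queries, num_keys), each row a
    probability distribution over keys. Higher entropy means a head
    spreads attention broadly; lower means it focuses on few keys.
    """
    p = attn_weights.clamp_min(eps)
    ent = -(p * p.log()).sum(dim=-1)   # (num_heads, num_queries)
    return ent.mean(dim=-1)            # one value per head
```

For scale, uniform attention over n keys gives entropy log n, so values of roughly 2 to 5.5 as in these dumps would correspond to moderately peaked heads.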
], batch size: 65, lr: 7.10e-03, grad_scale: 8.0 2023-03-09 08:06:44,924 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.796e+02 2.823e+02 3.373e+02 3.914e+02 6.631e+02, threshold=6.746e+02, percent-clipped=0.0 2023-03-09 08:07:11,885 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5707, 6.1935, 5.5158, 6.0291, 5.7586, 5.6301, 6.2514, 6.1818], device='cuda:3'), covar=tensor([0.1143, 0.0598, 0.0465, 0.0581, 0.1199, 0.0653, 0.0496, 0.0586], device='cuda:3'), in_proj_covar=tensor([0.0576, 0.0487, 0.0363, 0.0514, 0.0709, 0.0515, 0.0691, 0.0516], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 08:07:20,924 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7097, 3.4725, 2.2890, 4.5075, 3.2617, 4.4670, 2.5534, 4.3269], device='cuda:3'), covar=tensor([0.0560, 0.0869, 0.1365, 0.0466, 0.0795, 0.0260, 0.1067, 0.0318], device='cuda:3'), in_proj_covar=tensor([0.0206, 0.0220, 0.0184, 0.0265, 0.0187, 0.0257, 0.0196, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:07:29,001 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6287, 4.7188, 4.7336, 4.5144, 4.4862, 4.5397, 4.8185, 4.8033], device='cuda:3'), covar=tensor([0.0078, 0.0068, 0.0058, 0.0100, 0.0074, 0.0136, 0.0068, 0.0137], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0065, 0.0069, 0.0089, 0.0072, 0.0099, 0.0084, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 08:07:33,282 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56104.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:07:41,129 INFO [train.py:898] (3/4) Epoch 16, batch 1600, loss[loss=0.1436, simple_loss=0.224, pruned_loss=0.03156, over 18405.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2606, pruned_loss=0.04246, over 3602207.85 frames. ], batch size: 42, lr: 7.10e-03, grad_scale: 8.0 2023-03-09 08:07:54,535 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7079, 2.7451, 4.1505, 3.7594, 2.4999, 4.4360, 3.8876, 2.6589], device='cuda:3'), covar=tensor([0.0425, 0.1456, 0.0252, 0.0328, 0.1639, 0.0224, 0.0460, 0.1032], device='cuda:3'), in_proj_covar=tensor([0.0199, 0.0227, 0.0184, 0.0150, 0.0217, 0.0199, 0.0229, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 08:08:21,163 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56145.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:08:39,550 INFO [train.py:898] (3/4) Epoch 16, batch 1650, loss[loss=0.1881, simple_loss=0.2727, pruned_loss=0.0517, over 15990.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2597, pruned_loss=0.04221, over 3602400.13 frames. 
], batch size: 94, lr: 7.10e-03, grad_scale: 8.0 2023-03-09 08:08:40,628 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.176e+02 2.932e+02 3.714e+02 4.558e+02 1.092e+03, threshold=7.428e+02, percent-clipped=5.0 2023-03-09 08:09:30,608 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56204.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:09:33,094 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56206.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:09:38,404 INFO [train.py:898] (3/4) Epoch 16, batch 1700, loss[loss=0.1786, simple_loss=0.2696, pruned_loss=0.04377, over 18353.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.26, pruned_loss=0.04219, over 3600178.22 frames. ], batch size: 56, lr: 7.09e-03, grad_scale: 8.0 2023-03-09 08:09:42,141 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5266, 3.2929, 1.9758, 4.3530, 3.0267, 3.9486, 1.9841, 3.8076], device='cuda:3'), covar=tensor([0.0532, 0.0744, 0.1396, 0.0432, 0.0761, 0.0318, 0.1398, 0.0419], device='cuda:3'), in_proj_covar=tensor([0.0207, 0.0220, 0.0184, 0.0266, 0.0187, 0.0257, 0.0196, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:10:11,367 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7203, 4.4470, 4.6209, 3.4408, 3.7240, 3.3952, 2.8359, 2.4643], device='cuda:3'), covar=tensor([0.0216, 0.0144, 0.0072, 0.0293, 0.0377, 0.0244, 0.0671, 0.0931], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0053, 0.0056, 0.0064, 0.0086, 0.0062, 0.0075, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 08:10:36,638 INFO [train.py:898] (3/4) Epoch 16, batch 1750, loss[loss=0.1533, simple_loss=0.2384, pruned_loss=0.03404, over 18358.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2605, pruned_loss=0.04217, over 3600386.04 frames. ], batch size: 46, lr: 7.09e-03, grad_scale: 8.0 2023-03-09 08:10:37,715 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.941e+02 2.914e+02 3.484e+02 4.158e+02 1.053e+03, threshold=6.969e+02, percent-clipped=1.0 2023-03-09 08:11:36,013 INFO [train.py:898] (3/4) Epoch 16, batch 1800, loss[loss=0.1999, simple_loss=0.2968, pruned_loss=0.0515, over 16184.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2607, pruned_loss=0.0424, over 3597867.56 frames. ], batch size: 94, lr: 7.09e-03, grad_scale: 8.0 2023-03-09 08:12:27,602 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5254, 4.1550, 5.3606, 4.5492, 3.0551, 2.9604, 4.5696, 5.5415], device='cuda:3'), covar=tensor([0.0863, 0.1249, 0.0108, 0.0314, 0.0930, 0.1100, 0.0331, 0.0136], device='cuda:3'), in_proj_covar=tensor([0.0144, 0.0264, 0.0131, 0.0175, 0.0185, 0.0186, 0.0186, 0.0177], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:12:34,549 INFO [train.py:898] (3/4) Epoch 16, batch 1850, loss[loss=0.1541, simple_loss=0.2351, pruned_loss=0.03658, over 18163.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2597, pruned_loss=0.04229, over 3602241.64 frames. 
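[Editor's note] Each training line reports `loss`, `simple_loss`, and `pruned_loss` side by side, and the totals are consistent with a fixed linear mix of the two components: at epoch 16, batch 1750 above, 0.5 * 0.2605 + 0.04217 ~= 0.1724. The sketch below encodes that relation; the 0.5 weight is inferred from the logged numbers rather than quoted from train.py:

```python
def combine_losses(simple_loss: float, pruned_loss: float,
                   simple_loss_scale: float = 0.5) -> float:
    """Total transducer loss as logged: a scaled simple (full-lattice)
    term plus the pruned RNN-T term. The 0.5 weight is an inference
    about this run, not a quote of the training code."""
    return simple_loss_scale * simple_loss + pruned_loss

# Checked against the epoch 16, batch 1750 tot_loss line:
assert abs(combine_losses(0.2605, 0.04217) - 0.1724) < 5e-4
```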
], batch size: 44, lr: 7.09e-03, grad_scale: 8.0 2023-03-09 08:12:34,916 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9673, 5.1681, 2.5553, 5.0105, 4.8568, 5.2020, 4.9504, 2.4691], device='cuda:3'), covar=tensor([0.0178, 0.0074, 0.0804, 0.0089, 0.0071, 0.0059, 0.0096, 0.1001], device='cuda:3'), in_proj_covar=tensor([0.0081, 0.0075, 0.0090, 0.0087, 0.0079, 0.0069, 0.0079, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 08:12:35,491 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.821e+02 3.124e+02 3.834e+02 4.699e+02 1.584e+03, threshold=7.668e+02, percent-clipped=5.0 2023-03-09 08:13:01,798 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0 2023-03-09 08:13:24,813 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=56404.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:13:32,350 INFO [train.py:898] (3/4) Epoch 16, batch 1900, loss[loss=0.2058, simple_loss=0.2865, pruned_loss=0.06257, over 12732.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2605, pruned_loss=0.04276, over 3597607.99 frames. ], batch size: 131, lr: 7.08e-03, grad_scale: 8.0 2023-03-09 08:14:19,086 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6111, 2.8642, 4.2226, 3.7079, 2.6262, 4.6073, 3.9390, 2.9988], device='cuda:3'), covar=tensor([0.0560, 0.1573, 0.0279, 0.0389, 0.1633, 0.0214, 0.0545, 0.0958], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0231, 0.0187, 0.0154, 0.0220, 0.0203, 0.0233, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 08:14:20,035 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=56452.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:14:20,680 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.60 vs. limit=5.0 2023-03-09 08:14:30,010 INFO [train.py:898] (3/4) Epoch 16, batch 1950, loss[loss=0.15, simple_loss=0.2477, pruned_loss=0.02619, over 18510.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2608, pruned_loss=0.04304, over 3593337.27 frames. ], batch size: 47, lr: 7.08e-03, grad_scale: 8.0 2023-03-09 08:14:31,034 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.810e+02 3.032e+02 3.372e+02 4.243e+02 1.557e+03, threshold=6.744e+02, percent-clipped=4.0 2023-03-09 08:14:54,567 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.15 vs. limit=5.0 2023-03-09 08:15:16,851 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56501.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:15:20,406 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=56504.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:15:27,938 INFO [train.py:898] (3/4) Epoch 16, batch 2000, loss[loss=0.1766, simple_loss=0.2631, pruned_loss=0.0451, over 17729.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2607, pruned_loss=0.0429, over 3602174.78 frames. 
], batch size: 70, lr: 7.08e-03, grad_scale: 8.0 2023-03-09 08:15:42,580 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56523.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 08:15:42,730 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8213, 3.2079, 3.9305, 2.8252, 3.5592, 2.6888, 2.7648, 2.4004], device='cuda:3'), covar=tensor([0.0845, 0.0847, 0.0222, 0.0622, 0.0681, 0.1889, 0.2056, 0.1401], device='cuda:3'), in_proj_covar=tensor([0.0209, 0.0229, 0.0151, 0.0181, 0.0243, 0.0259, 0.0305, 0.0224], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 08:16:16,964 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=56552.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:16:26,971 INFO [train.py:898] (3/4) Epoch 16, batch 2050, loss[loss=0.1789, simple_loss=0.2583, pruned_loss=0.04978, over 18496.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2608, pruned_loss=0.04281, over 3593007.71 frames. ], batch size: 47, lr: 7.07e-03, grad_scale: 8.0 2023-03-09 08:16:28,116 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.109e+02 2.908e+02 3.297e+02 4.101e+02 7.290e+02, threshold=6.593e+02, percent-clipped=1.0 2023-03-09 08:16:54,703 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56584.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 08:17:26,195 INFO [train.py:898] (3/4) Epoch 16, batch 2100, loss[loss=0.1627, simple_loss=0.2453, pruned_loss=0.04009, over 18489.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.261, pruned_loss=0.04305, over 3563418.44 frames. ], batch size: 51, lr: 7.07e-03, grad_scale: 4.0 2023-03-09 08:18:11,139 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-09 08:18:14,534 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-09 08:18:25,143 INFO [train.py:898] (3/4) Epoch 16, batch 2150, loss[loss=0.1539, simple_loss=0.2437, pruned_loss=0.03207, over 18565.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2611, pruned_loss=0.04285, over 3572153.49 frames. ], batch size: 49, lr: 7.07e-03, grad_scale: 4.0 2023-03-09 08:18:27,229 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.074e+02 2.907e+02 3.397e+02 4.221e+02 7.037e+02, threshold=6.794e+02, percent-clipped=2.0 2023-03-09 08:19:23,205 INFO [train.py:898] (3/4) Epoch 16, batch 2200, loss[loss=0.17, simple_loss=0.2582, pruned_loss=0.04088, over 18310.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.262, pruned_loss=0.0435, over 3570422.68 frames. 
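[Editor's note] The `zipformer.py:625` entries decide per batch whether to skip encoder layers (`num_to_drop`, `layers_to_drop`), keyed to a warmup window in batch counts; at batch_count around 56k, far past the 4000-batch windows shown, they almost always report `num_to_drop=0` with an occasional single drop. One hypothetical schedule with that behaviour (the probabilities here are illustrative assumptions, not the zipformer.py rule):

```python
import random

def choose_layers_to_drop(batch_count: float, warmup_begin: float,
                          warmup_end: float, num_layers: int,
                          max_drop_prob: float = 0.075,
                          residual_prob: float = 0.025) -> set:
    """Randomly pick encoder layers to skip for this batch: aggressive
    early in the warmup window, rare afterwards."""
    if batch_count >= warmup_end:
        drop_prob = residual_prob
    elif batch_count <= warmup_begin:
        drop_prob = max_drop_prob
    else:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        drop_prob = max_drop_prob - frac * (max_drop_prob - residual_prob)
    return {i for i in range(num_layers) if random.random() < drop_prob}
```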
], batch size: 49, lr: 7.06e-03, grad_scale: 4.0 2023-03-09 08:19:57,319 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.3911, 3.2901, 4.2998, 4.0220, 3.0162, 2.8689, 3.9229, 4.5248], device='cuda:3'), covar=tensor([0.0865, 0.1463, 0.0256, 0.0378, 0.0852, 0.1038, 0.0369, 0.0269], device='cuda:3'), in_proj_covar=tensor([0.0142, 0.0261, 0.0129, 0.0173, 0.0183, 0.0184, 0.0185, 0.0175], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:20:12,990 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4738, 2.8413, 2.4990, 2.9063, 3.5302, 3.5097, 3.0382, 2.9949], device='cuda:3'), covar=tensor([0.0180, 0.0263, 0.0573, 0.0354, 0.0190, 0.0158, 0.0372, 0.0350], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0125, 0.0158, 0.0148, 0.0117, 0.0105, 0.0147, 0.0143], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:20:21,590 INFO [train.py:898] (3/4) Epoch 16, batch 2250, loss[loss=0.1822, simple_loss=0.2722, pruned_loss=0.04613, over 18501.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2627, pruned_loss=0.04341, over 3574957.11 frames. ], batch size: 51, lr: 7.06e-03, grad_scale: 4.0 2023-03-09 08:20:23,718 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.871e+02 2.822e+02 3.228e+02 3.719e+02 7.082e+02, threshold=6.456e+02, percent-clipped=1.0 2023-03-09 08:21:09,231 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=56801.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:21:11,631 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56803.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:21:12,694 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4359, 5.9305, 5.4994, 5.7110, 5.4190, 5.3878, 5.9959, 5.9332], device='cuda:3'), covar=tensor([0.1215, 0.0735, 0.0441, 0.0738, 0.1522, 0.0755, 0.0564, 0.0691], device='cuda:3'), in_proj_covar=tensor([0.0576, 0.0489, 0.0360, 0.0516, 0.0705, 0.0516, 0.0693, 0.0522], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 08:21:20,394 INFO [train.py:898] (3/4) Epoch 16, batch 2300, loss[loss=0.1467, simple_loss=0.2264, pruned_loss=0.03346, over 18378.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2622, pruned_loss=0.04311, over 3581069.27 frames. ], batch size: 43, lr: 7.06e-03, grad_scale: 4.0 2023-03-09 08:21:23,076 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7946, 4.4395, 4.5337, 3.4378, 3.8251, 3.5540, 2.9830, 2.6201], device='cuda:3'), covar=tensor([0.0209, 0.0168, 0.0083, 0.0273, 0.0268, 0.0187, 0.0563, 0.0822], device='cuda:3'), in_proj_covar=tensor([0.0065, 0.0053, 0.0056, 0.0064, 0.0084, 0.0062, 0.0074, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 08:21:38,786 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.03 vs. limit=5.0 2023-03-09 08:22:04,698 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=56849.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:22:18,548 INFO [train.py:898] (3/4) Epoch 16, batch 2350, loss[loss=0.148, simple_loss=0.2369, pruned_loss=0.02955, over 18507.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2622, pruned_loss=0.04316, over 3577176.18 frames. 
], batch size: 47, lr: 7.05e-03, grad_scale: 4.0 2023-03-09 08:22:18,923 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9932, 5.2727, 2.9085, 5.1439, 4.9953, 5.3261, 5.0883, 2.8754], device='cuda:3'), covar=tensor([0.0181, 0.0073, 0.0680, 0.0074, 0.0067, 0.0071, 0.0102, 0.0883], device='cuda:3'), in_proj_covar=tensor([0.0082, 0.0075, 0.0090, 0.0087, 0.0080, 0.0070, 0.0080, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 08:22:20,836 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.203e+02 2.978e+02 3.446e+02 4.170e+02 7.854e+02, threshold=6.893e+02, percent-clipped=4.0 2023-03-09 08:22:22,383 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56864.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:22:39,125 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56879.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 08:23:09,937 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-09 08:23:13,975 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7423, 3.1343, 4.4679, 3.8037, 2.7836, 4.7765, 4.0335, 3.0834], device='cuda:3'), covar=tensor([0.0498, 0.1309, 0.0235, 0.0400, 0.1471, 0.0193, 0.0508, 0.0858], device='cuda:3'), in_proj_covar=tensor([0.0205, 0.0234, 0.0191, 0.0156, 0.0222, 0.0206, 0.0239, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 08:23:13,988 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56908.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:23:17,006 INFO [train.py:898] (3/4) Epoch 16, batch 2400, loss[loss=0.1617, simple_loss=0.2574, pruned_loss=0.03296, over 18290.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2624, pruned_loss=0.04333, over 3572776.27 frames. ], batch size: 49, lr: 7.05e-03, grad_scale: 8.0 2023-03-09 08:23:24,506 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-09 08:24:15,260 INFO [train.py:898] (3/4) Epoch 16, batch 2450, loss[loss=0.1629, simple_loss=0.2582, pruned_loss=0.0338, over 18247.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2618, pruned_loss=0.04312, over 3578863.55 frames. 
], batch size: 60, lr: 7.05e-03, grad_scale: 8.0 2023-03-09 08:24:17,552 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.256e+02 2.910e+02 3.453e+02 4.291e+02 1.108e+03, threshold=6.907e+02, percent-clipped=4.0 2023-03-09 08:24:24,735 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56969.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:24:28,218 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1480, 4.2129, 2.5578, 4.1574, 5.3004, 2.6504, 3.9911, 3.9105], device='cuda:3'), covar=tensor([0.0099, 0.0991, 0.1408, 0.0551, 0.0043, 0.1117, 0.0580, 0.0690], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0255, 0.0196, 0.0190, 0.0105, 0.0178, 0.0210, 0.0217], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:24:52,060 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56993.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:24:54,299 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56995.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:25:11,505 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.37 vs. limit=5.0 2023-03-09 08:25:13,033 INFO [train.py:898] (3/4) Epoch 16, batch 2500, loss[loss=0.2077, simple_loss=0.284, pruned_loss=0.06573, over 13135.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2611, pruned_loss=0.04308, over 3581824.78 frames. ], batch size: 130, lr: 7.04e-03, grad_scale: 8.0 2023-03-09 08:25:33,927 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7657, 2.1900, 2.7380, 2.7991, 3.2539, 5.1153, 4.8708, 3.6526], device='cuda:3'), covar=tensor([0.1498, 0.2311, 0.2578, 0.1653, 0.2168, 0.0153, 0.0329, 0.0745], device='cuda:3'), in_proj_covar=tensor([0.0275, 0.0330, 0.0355, 0.0265, 0.0379, 0.0221, 0.0285, 0.0235], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 08:26:03,204 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57054.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:26:05,471 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57056.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:26:11,275 INFO [train.py:898] (3/4) Epoch 16, batch 2550, loss[loss=0.1766, simple_loss=0.2647, pruned_loss=0.04427, over 18373.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2601, pruned_loss=0.0426, over 3590400.64 frames. ], batch size: 55, lr: 7.04e-03, grad_scale: 8.0 2023-03-09 08:26:13,776 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.929e+02 2.789e+02 3.538e+02 4.534e+02 7.082e+02, threshold=7.077e+02, percent-clipped=1.0 2023-03-09 08:26:41,259 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9818, 4.9520, 4.6132, 4.8709, 4.8906, 4.3107, 4.8259, 4.6323], device='cuda:3'), covar=tensor([0.0371, 0.0475, 0.1250, 0.0759, 0.0565, 0.0450, 0.0416, 0.0895], device='cuda:3'), in_proj_covar=tensor([0.0454, 0.0513, 0.0667, 0.0411, 0.0406, 0.0474, 0.0503, 0.0634], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 08:27:09,955 INFO [train.py:898] (3/4) Epoch 16, batch 2600, loss[loss=0.1741, simple_loss=0.2568, pruned_loss=0.04571, over 18297.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.26, pruned_loss=0.04263, over 3592115.15 frames. 
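[Editor's note] The `scaling.py:679` lines compare a per-module whitening metric against a limit (2.0 for the grouped 96- and 192-channel checks, 5.0 for the full 384-channel ones). One natural reading is a measure of how far the channel covariance is from a multiple of the identity, equal to 1.0 for perfectly white features; the sketch below follows that reading, which is an assumption about scaling.py rather than a quote of it:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """How 'non-white' the features are, per channel group.

    x: (num_frames, num_channels). For each group of channels, form the
    covariance C and compute trace(C @ C) * dim / trace(C)**2, which is
    1.0 when C is a multiple of the identity and grows as variance
    concentrates in few directions."""
    n, c = x.shape
    dim = c // num_groups
    xg = x.reshape(n, num_groups, dim).transpose(0, 1)   # (groups, n, dim)
    xg = xg - xg.mean(dim=1, keepdim=True)
    cov = xg.transpose(1, 2) @ xg / n                    # (groups, dim, dim)
    num = (cov * cov).sum(dim=(1, 2)) * dim
    den = cov.diagonal(dim1=1, dim2=2).sum(dim=1) ** 2
    return (num / den).mean().item()
```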
], batch size: 57, lr: 7.04e-03, grad_scale: 8.0 2023-03-09 08:28:05,926 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57159.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:28:07,954 INFO [train.py:898] (3/4) Epoch 16, batch 2650, loss[loss=0.1489, simple_loss=0.2312, pruned_loss=0.0333, over 18522.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2604, pruned_loss=0.04279, over 3591861.99 frames. ], batch size: 44, lr: 7.04e-03, grad_scale: 4.0 2023-03-09 08:28:11,698 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.853e+02 2.763e+02 3.335e+02 4.015e+02 1.057e+03, threshold=6.669e+02, percent-clipped=2.0 2023-03-09 08:28:29,466 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57179.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 08:28:37,470 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57186.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:28:49,738 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57197.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:29:06,035 INFO [train.py:898] (3/4) Epoch 16, batch 2700, loss[loss=0.2043, simple_loss=0.2922, pruned_loss=0.05817, over 18086.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2605, pruned_loss=0.0429, over 3589666.99 frames. ], batch size: 62, lr: 7.03e-03, grad_scale: 4.0 2023-03-09 08:29:25,742 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=57227.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 08:29:44,013 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8518, 4.0645, 2.2065, 3.8888, 5.1318, 2.5344, 3.7485, 3.8116], device='cuda:3'), covar=tensor([0.0168, 0.1252, 0.1885, 0.0700, 0.0074, 0.1302, 0.0779, 0.0798], device='cuda:3'), in_proj_covar=tensor([0.0148, 0.0261, 0.0200, 0.0194, 0.0107, 0.0183, 0.0214, 0.0223], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:29:44,869 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57244.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:29:48,291 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57247.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:30:01,257 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57258.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 08:30:04,240 INFO [train.py:898] (3/4) Epoch 16, batch 2750, loss[loss=0.1578, simple_loss=0.2417, pruned_loss=0.03696, over 18541.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2603, pruned_loss=0.0428, over 3592167.91 frames. 
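[Editor's note] `grad_scale` flips between 8.0 and 4.0 across these batches, which is the signature of dynamic loss scaling under mixed precision: the scale is halved when a step produces inf/nan gradients and grown back after a run of clean steps. A sketch using the stock PyTorch scaler; the constructor arguments are the real API, while the specific growth_interval is an assumption:

```python
import torch

def amp_step(scaler: torch.cuda.amp.GradScaler, loss, optimizer):
    """One optimizer step under dynamic loss scaling."""
    scaler.scale(loss).backward()
    scaler.step(optimizer)   # skipped internally if inf/nan gradients
    scaler.update()          # halve on overflow, grow after clean steps

# Values mirroring the log's 8.0 <-> 4.0 flips:
scaler = torch.cuda.amp.GradScaler(init_scale=8.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)
```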
], batch size: 49, lr: 7.03e-03, grad_scale: 4.0 2023-03-09 08:30:08,180 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.113e+02 2.970e+02 3.336e+02 3.947e+02 1.031e+03, threshold=6.671e+02, percent-clipped=3.0 2023-03-09 08:30:08,381 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57264.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:30:46,687 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7058, 3.3441, 4.7349, 2.8328, 4.2226, 2.5373, 2.8206, 1.9845], device='cuda:3'), covar=tensor([0.1099, 0.1006, 0.0177, 0.0784, 0.0496, 0.2390, 0.2755, 0.2031], device='cuda:3'), in_proj_covar=tensor([0.0210, 0.0232, 0.0154, 0.0184, 0.0245, 0.0259, 0.0308, 0.0226], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 08:30:55,688 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57305.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:31:02,569 INFO [train.py:898] (3/4) Epoch 16, batch 2800, loss[loss=0.1691, simple_loss=0.2632, pruned_loss=0.03751, over 18491.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2609, pruned_loss=0.04299, over 3585033.08 frames. ], batch size: 53, lr: 7.03e-03, grad_scale: 8.0 2023-03-09 08:31:47,093 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57349.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:31:49,417 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57351.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:32:00,270 INFO [train.py:898] (3/4) Epoch 16, batch 2850, loss[loss=0.1644, simple_loss=0.2513, pruned_loss=0.03874, over 18384.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2607, pruned_loss=0.04263, over 3593049.05 frames. ], batch size: 50, lr: 7.02e-03, grad_scale: 8.0 2023-03-09 08:32:04,038 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.919e+02 2.832e+02 3.405e+02 3.993e+02 9.421e+02, threshold=6.810e+02, percent-clipped=3.0 2023-03-09 08:32:44,119 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57398.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:32:58,702 INFO [train.py:898] (3/4) Epoch 16, batch 2900, loss[loss=0.171, simple_loss=0.27, pruned_loss=0.03601, over 18493.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2608, pruned_loss=0.0426, over 3585175.87 frames. ], batch size: 53, lr: 7.02e-03, grad_scale: 8.0 2023-03-09 08:33:04,050 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.86 vs. limit=5.0 2023-03-09 08:33:55,270 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57459.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:33:55,399 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57459.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:33:57,204 INFO [train.py:898] (3/4) Epoch 16, batch 2950, loss[loss=0.1865, simple_loss=0.2765, pruned_loss=0.04826, over 18013.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2611, pruned_loss=0.04245, over 3588565.17 frames. 
], batch size: 65, lr: 7.02e-03, grad_scale: 8.0 2023-03-09 08:33:58,707 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3338, 5.8106, 5.3521, 5.5684, 5.4306, 5.2527, 5.8813, 5.8080], device='cuda:3'), covar=tensor([0.1022, 0.0804, 0.0531, 0.0805, 0.1311, 0.0668, 0.0609, 0.0761], device='cuda:3'), in_proj_covar=tensor([0.0571, 0.0489, 0.0362, 0.0513, 0.0706, 0.0513, 0.0688, 0.0521], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 08:33:58,787 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57462.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:34:00,796 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.919e+02 2.681e+02 3.202e+02 3.928e+02 6.706e+02, threshold=6.405e+02, percent-clipped=1.0 2023-03-09 08:34:14,485 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8287, 4.8578, 4.2383, 4.7797, 4.8273, 4.2472, 4.7023, 4.4611], device='cuda:3'), covar=tensor([0.0613, 0.0654, 0.2078, 0.0942, 0.0691, 0.0580, 0.0597, 0.1207], device='cuda:3'), in_proj_covar=tensor([0.0457, 0.0518, 0.0673, 0.0414, 0.0406, 0.0475, 0.0504, 0.0639], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 08:34:17,984 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7877, 2.9532, 2.5999, 3.0328, 3.7102, 3.7349, 3.2152, 3.1789], device='cuda:3'), covar=tensor([0.0174, 0.0276, 0.0569, 0.0316, 0.0186, 0.0171, 0.0321, 0.0325], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0127, 0.0159, 0.0148, 0.0118, 0.0105, 0.0147, 0.0145], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:34:38,269 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57495.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:34:51,826 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=57507.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:34:56,201 INFO [train.py:898] (3/4) Epoch 16, batch 3000, loss[loss=0.1798, simple_loss=0.2587, pruned_loss=0.05044, over 18504.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2607, pruned_loss=0.04246, over 3582271.81 frames. ], batch size: 44, lr: 7.01e-03, grad_scale: 8.0 2023-03-09 08:34:56,201 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 08:35:06,811 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0138, 2.9044, 1.9311, 3.5335, 2.4952, 3.2460, 2.0634, 3.0496], device='cuda:3'), covar=tensor([0.0712, 0.0969, 0.1411, 0.0524, 0.0941, 0.0300, 0.1309, 0.0437], device='cuda:3'), in_proj_covar=tensor([0.0205, 0.0219, 0.0183, 0.0264, 0.0185, 0.0258, 0.0197, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:35:08,108 INFO [train.py:932] (3/4) Epoch 16, validation: loss=0.1522, simple_loss=0.2529, pruned_loss=0.02576, over 944034.00 frames. 
2023-03-09 08:35:08,108 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 08:35:14,824 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5672, 3.3942, 2.4120, 4.3407, 3.0399, 4.2803, 2.4573, 3.9691], device='cuda:3'), covar=tensor([0.0591, 0.0789, 0.1229, 0.0449, 0.0750, 0.0308, 0.1064, 0.0371], device='cuda:3'), in_proj_covar=tensor([0.0204, 0.0218, 0.0182, 0.0264, 0.0185, 0.0258, 0.0196, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:35:23,918 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57523.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:35:37,320 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3153, 4.3132, 2.7369, 4.3684, 5.4232, 2.7834, 4.2883, 4.1485], device='cuda:3'), covar=tensor([0.0107, 0.1342, 0.1472, 0.0521, 0.0056, 0.1145, 0.0503, 0.0683], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0256, 0.0197, 0.0190, 0.0106, 0.0179, 0.0210, 0.0217], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:35:44,951 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57542.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:35:57,288 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57553.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 08:36:00,816 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57556.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:36:05,943 INFO [train.py:898] (3/4) Epoch 16, batch 3050, loss[loss=0.214, simple_loss=0.2877, pruned_loss=0.07013, over 12306.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2607, pruned_loss=0.04238, over 3584504.90 frames. ], batch size: 129, lr: 7.01e-03, grad_scale: 8.0 2023-03-09 08:36:09,983 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.073e+02 2.813e+02 3.441e+02 4.204e+02 1.352e+03, threshold=6.882e+02, percent-clipped=6.0 2023-03-09 08:36:10,948 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57564.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:36:52,895 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57600.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:37:05,424 INFO [train.py:898] (3/4) Epoch 16, batch 3100, loss[loss=0.1462, simple_loss=0.2317, pruned_loss=0.03034, over 17711.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2617, pruned_loss=0.0427, over 3579504.88 frames. ], batch size: 39, lr: 7.01e-03, grad_scale: 8.0 2023-03-09 08:37:06,733 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=57612.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:37:38,437 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-09 08:37:51,342 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57649.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:37:53,493 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57651.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:38:04,647 INFO [train.py:898] (3/4) Epoch 16, batch 3150, loss[loss=0.1596, simple_loss=0.2537, pruned_loss=0.03272, over 18475.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2611, pruned_loss=0.0423, over 3588234.51 frames. 
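[Editor's note] At epoch 16, batch 3000 above, the trainer pauses to compute a validation loss over the dev data and reports the peak GPU memory seen so far. A generic sketch of that pattern; `model.compute_loss` is a hypothetical stand-in for the real loss computation:

```python
import torch

@torch.no_grad()
def compute_validation_loss(model, valid_loader, device) -> float:
    """Frame-weighted dev-set loss plus the peak-memory report that the
    'Maximum memory allocated so far' lines show."""
    model.eval()
    loss_sum, frames = 0.0, 0.0
    for batch in valid_loader:
        loss, num_frames = model.compute_loss(batch, device)  # hypothetical
        loss_sum += loss.item() * num_frames
        frames += num_frames
    model.train()
    peak_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"validation: loss={loss_sum / frames:.4f}, over {frames:.2f} frames.")
    print(f"Maximum memory allocated so far is {peak_mb}MB")
    return loss_sum / frames
```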
], batch size: 51, lr: 7.01e-03, grad_scale: 8.0 2023-03-09 08:38:08,016 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.831e+02 2.982e+02 3.518e+02 4.061e+02 7.623e+02, threshold=7.037e+02, percent-clipped=2.0 2023-03-09 08:38:48,029 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=57697.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:38:50,423 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=57699.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:39:03,754 INFO [train.py:898] (3/4) Epoch 16, batch 3200, loss[loss=0.2086, simple_loss=0.2838, pruned_loss=0.06669, over 12977.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.261, pruned_loss=0.04235, over 3588850.78 frames. ], batch size: 130, lr: 7.00e-03, grad_scale: 8.0 2023-03-09 08:39:30,517 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57733.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:39:54,281 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57754.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:40:02,254 INFO [train.py:898] (3/4) Epoch 16, batch 3250, loss[loss=0.1425, simple_loss=0.2232, pruned_loss=0.03087, over 18456.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2608, pruned_loss=0.04235, over 3589802.63 frames. ], batch size: 43, lr: 7.00e-03, grad_scale: 8.0 2023-03-09 08:40:05,686 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.974e+02 2.787e+02 3.348e+02 4.125e+02 7.388e+02, threshold=6.695e+02, percent-clipped=1.0 2023-03-09 08:40:33,239 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5944, 2.8867, 2.4589, 2.9765, 3.5931, 3.5733, 3.2045, 2.9765], device='cuda:3'), covar=tensor([0.0196, 0.0298, 0.0641, 0.0348, 0.0177, 0.0167, 0.0348, 0.0367], device='cuda:3'), in_proj_covar=tensor([0.0128, 0.0127, 0.0160, 0.0149, 0.0117, 0.0105, 0.0146, 0.0145], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:40:41,835 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57794.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:40:46,415 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0729, 3.8009, 5.0933, 2.9510, 4.4139, 2.7649, 3.1654, 1.9160], device='cuda:3'), covar=tensor([0.0974, 0.0818, 0.0115, 0.0824, 0.0550, 0.2233, 0.2400, 0.2003], device='cuda:3'), in_proj_covar=tensor([0.0211, 0.0233, 0.0155, 0.0185, 0.0246, 0.0259, 0.0308, 0.0226], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 08:40:50,388 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-09 08:41:00,955 INFO [train.py:898] (3/4) Epoch 16, batch 3300, loss[loss=0.1635, simple_loss=0.2508, pruned_loss=0.03803, over 18637.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.261, pruned_loss=0.0427, over 3583811.87 frames. 
], batch size: 52, lr: 7.00e-03, grad_scale: 8.0 2023-03-09 08:41:09,092 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57818.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:41:35,600 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9013, 2.4345, 2.1427, 2.4650, 2.9357, 2.9323, 2.7002, 2.5048], device='cuda:3'), covar=tensor([0.0162, 0.0233, 0.0530, 0.0325, 0.0171, 0.0162, 0.0339, 0.0351], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0127, 0.0160, 0.0149, 0.0117, 0.0105, 0.0147, 0.0145], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:41:37,691 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57842.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:41:37,880 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6247, 3.9768, 2.2704, 3.8653, 4.7951, 2.3019, 3.5815, 3.7088], device='cuda:3'), covar=tensor([0.0187, 0.1040, 0.1896, 0.0632, 0.0091, 0.1605, 0.0744, 0.0732], device='cuda:3'), in_proj_covar=tensor([0.0147, 0.0260, 0.0199, 0.0192, 0.0108, 0.0181, 0.0213, 0.0220], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:41:47,832 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57851.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:41:50,197 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57853.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 08:41:58,588 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0 2023-03-09 08:41:59,085 INFO [train.py:898] (3/4) Epoch 16, batch 3350, loss[loss=0.1505, simple_loss=0.2318, pruned_loss=0.0346, over 18156.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2611, pruned_loss=0.04249, over 3582589.85 frames. ], batch size: 44, lr: 6.99e-03, grad_scale: 8.0 2023-03-09 08:42:02,097 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.50 vs. limit=2.0 2023-03-09 08:42:02,565 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.007e+02 2.795e+02 3.328e+02 4.480e+02 9.325e+02, threshold=6.655e+02, percent-clipped=2.0 2023-03-09 08:42:33,259 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=57890.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:42:44,966 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57900.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:42:46,000 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=57901.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:42:57,109 INFO [train.py:898] (3/4) Epoch 16, batch 3400, loss[loss=0.1372, simple_loss=0.2222, pruned_loss=0.02614, over 18382.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2616, pruned_loss=0.04266, over 3586336.38 frames. ], batch size: 42, lr: 6.99e-03, grad_scale: 8.0 2023-03-09 08:43:13,370 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.64 vs. limit=5.0 2023-03-09 08:43:41,004 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=57948.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:43:46,134 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. 
limit=2.0 2023-03-09 08:43:55,561 INFO [train.py:898] (3/4) Epoch 16, batch 3450, loss[loss=0.1541, simple_loss=0.239, pruned_loss=0.03463, over 18425.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2603, pruned_loss=0.04221, over 3595289.17 frames. ], batch size: 42, lr: 6.99e-03, grad_scale: 8.0 2023-03-09 08:43:58,788 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.652e+02 2.617e+02 3.263e+02 4.009e+02 9.440e+02, threshold=6.526e+02, percent-clipped=3.0 2023-03-09 08:44:59,167 INFO [train.py:898] (3/4) Epoch 16, batch 3500, loss[loss=0.2054, simple_loss=0.2909, pruned_loss=0.05994, over 18277.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2613, pruned_loss=0.04275, over 3583671.36 frames. ], batch size: 57, lr: 6.98e-03, grad_scale: 8.0 2023-03-09 08:45:38,397 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8106, 4.4274, 4.6406, 3.4220, 3.7246, 3.5266, 2.5173, 2.4089], device='cuda:3'), covar=tensor([0.0222, 0.0182, 0.0065, 0.0267, 0.0332, 0.0201, 0.0710, 0.0910], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0054, 0.0058, 0.0065, 0.0087, 0.0062, 0.0075, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 08:45:48,080 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58054.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:45:55,319 INFO [train.py:898] (3/4) Epoch 16, batch 3550, loss[loss=0.1832, simple_loss=0.2732, pruned_loss=0.04656, over 17699.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2608, pruned_loss=0.04255, over 3582638.05 frames. ], batch size: 70, lr: 6.98e-03, grad_scale: 8.0 2023-03-09 08:45:58,563 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.098e+02 2.999e+02 3.554e+02 4.304e+02 1.121e+03, threshold=7.108e+02, percent-clipped=3.0 2023-03-09 08:46:26,739 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=58089.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:46:40,551 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=58102.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:46:46,286 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4939, 2.1527, 2.3848, 2.3918, 2.8150, 4.4681, 4.2052, 3.3528], device='cuda:3'), covar=tensor([0.1852, 0.2935, 0.3289, 0.1972, 0.2986, 0.0271, 0.0491, 0.0832], device='cuda:3'), in_proj_covar=tensor([0.0277, 0.0331, 0.0360, 0.0265, 0.0379, 0.0220, 0.0284, 0.0236], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 08:46:47,344 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7500, 2.3444, 2.7120, 2.6752, 3.3056, 5.0237, 4.7823, 3.5650], device='cuda:3'), covar=tensor([0.1519, 0.2184, 0.2721, 0.1606, 0.2063, 0.0154, 0.0332, 0.0742], device='cuda:3'), in_proj_covar=tensor([0.0277, 0.0331, 0.0359, 0.0265, 0.0378, 0.0220, 0.0284, 0.0236], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 08:46:50,129 INFO [train.py:898] (3/4) Epoch 16, batch 3600, loss[loss=0.1583, simple_loss=0.2444, pruned_loss=0.03611, over 18364.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2612, pruned_loss=0.04251, over 3584120.66 frames. 
], batch size: 46, lr: 6.98e-03, grad_scale: 8.0 2023-03-09 08:46:58,073 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58118.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:47:21,707 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58141.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:47:54,103 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58144.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:47:54,867 INFO [train.py:898] (3/4) Epoch 17, batch 0, loss[loss=0.1863, simple_loss=0.2815, pruned_loss=0.04552, over 18352.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2815, pruned_loss=0.04552, over 18352.00 frames. ], batch size: 56, lr: 6.77e-03, grad_scale: 8.0 2023-03-09 08:47:54,867 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 08:48:01,080 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.3953, 2.0859, 2.3514, 2.4079, 2.8357, 4.1833, 3.9641, 2.8338], device='cuda:3'), covar=tensor([0.1861, 0.2570, 0.3012, 0.1983, 0.2413, 0.0271, 0.0490, 0.0997], device='cuda:3'), in_proj_covar=tensor([0.0276, 0.0330, 0.0356, 0.0264, 0.0376, 0.0218, 0.0282, 0.0234], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 08:48:01,291 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.7908, 6.1566, 5.6887, 6.0256, 5.7981, 5.6929, 6.2212, 6.2089], device='cuda:3'), covar=tensor([0.1056, 0.0699, 0.0278, 0.0631, 0.1256, 0.0654, 0.0475, 0.0641], device='cuda:3'), in_proj_covar=tensor([0.0578, 0.0495, 0.0361, 0.0524, 0.0711, 0.0516, 0.0696, 0.0525], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 08:48:03,257 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4182, 2.8526, 2.6930, 2.8859, 3.4902, 3.5144, 3.1143, 3.0118], device='cuda:3'), covar=tensor([0.0211, 0.0288, 0.0524, 0.0430, 0.0190, 0.0142, 0.0394, 0.0393], device='cuda:3'), in_proj_covar=tensor([0.0129, 0.0127, 0.0158, 0.0149, 0.0115, 0.0103, 0.0146, 0.0145], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:48:06,679 INFO [train.py:932] (3/4) Epoch 17, validation: loss=0.1527, simple_loss=0.2537, pruned_loss=0.02582, over 944034.00 frames. 2023-03-09 08:48:06,680 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 08:48:13,784 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58151.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:48:30,194 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.965e+02 2.893e+02 3.453e+02 4.374e+02 8.967e+02, threshold=6.906e+02, percent-clipped=3.0 2023-03-09 08:48:32,615 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=58166.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:49:05,026 INFO [train.py:898] (3/4) Epoch 17, batch 50, loss[loss=0.1671, simple_loss=0.2643, pruned_loss=0.03494, over 18397.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2607, pruned_loss=0.04285, over 812637.37 frames. 
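[Editor's note] Across the epoch boundary the learning rate steps from 6.98e-03 (end of epoch 16) to 6.77e-03 (epoch 17, batch 0) while decaying only slowly within an epoch, consistent with a schedule that decays in both batch count and epoch count. The sketch below reproduces both logged values when the epoch term counts completed epochs and the base rate is taken as 0.05; all constants are inferred from the log, not quoted from the scheduler:

```python
def scheduled_lr(base_lr: float, batch: int, completed_epochs: int,
                 lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    """Eden-style decay in both batch and epoch count: smooth within an
    epoch, with a visible drop at each epoch boundary."""
    batch_f = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_f = ((completed_epochs ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_f * epoch_f

# With base_lr = 0.05 (an assumption about this run):
#   scheduled_lr(0.05, 58144, 15) ~= 6.98e-03   (end of epoch 16)
#   scheduled_lr(0.05, 58144, 16) ~= 6.77e-03   (start of epoch 17)
```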
], batch size: 52, lr: 6.76e-03, grad_scale: 8.0 2023-03-09 08:49:09,576 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=58199.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:49:13,571 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58202.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:49:16,953 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58205.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:50:03,222 INFO [train.py:898] (3/4) Epoch 17, batch 100, loss[loss=0.1755, simple_loss=0.2658, pruned_loss=0.04263, over 18230.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.262, pruned_loss=0.04286, over 1418713.03 frames. ], batch size: 60, lr: 6.76e-03, grad_scale: 8.0 2023-03-09 08:50:26,005 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.069e+02 2.900e+02 3.440e+02 4.043e+02 9.296e+02, threshold=6.881e+02, percent-clipped=1.0 2023-03-09 08:51:02,237 INFO [train.py:898] (3/4) Epoch 17, batch 150, loss[loss=0.1826, simple_loss=0.2774, pruned_loss=0.04384, over 18135.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2621, pruned_loss=0.04319, over 1877574.49 frames. ], batch size: 62, lr: 6.76e-03, grad_scale: 8.0 2023-03-09 08:51:03,847 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7890, 2.9715, 2.8604, 3.0388, 3.7525, 3.7709, 3.3106, 3.1184], device='cuda:3'), covar=tensor([0.0181, 0.0246, 0.0492, 0.0324, 0.0160, 0.0128, 0.0318, 0.0387], device='cuda:3'), in_proj_covar=tensor([0.0127, 0.0125, 0.0157, 0.0147, 0.0115, 0.0103, 0.0144, 0.0143], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 08:51:48,221 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.52 vs. limit=2.0 2023-03-09 08:51:48,256 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-09 08:52:01,272 INFO [train.py:898] (3/4) Epoch 17, batch 200, loss[loss=0.1706, simple_loss=0.2491, pruned_loss=0.04607, over 18161.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2611, pruned_loss=0.04281, over 2259862.28 frames. ], batch size: 44, lr: 6.75e-03, grad_scale: 8.0 2023-03-09 08:52:11,860 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9387, 4.5573, 4.7330, 3.5147, 3.9422, 3.6765, 2.7924, 2.5628], device='cuda:3'), covar=tensor([0.0192, 0.0154, 0.0077, 0.0274, 0.0324, 0.0179, 0.0668, 0.0852], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0054, 0.0058, 0.0065, 0.0086, 0.0062, 0.0075, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 08:52:22,849 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.674e+02 2.879e+02 3.254e+02 3.939e+02 7.173e+02, threshold=6.508e+02, percent-clipped=1.0 2023-03-09 08:52:53,406 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58389.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:52:59,835 INFO [train.py:898] (3/4) Epoch 17, batch 250, loss[loss=0.1713, simple_loss=0.262, pruned_loss=0.0403, over 18345.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2606, pruned_loss=0.04246, over 2565051.85 frames. 
], batch size: 55, lr: 6.75e-03, grad_scale: 8.0 2023-03-09 08:53:50,113 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=58437.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:53:59,032 INFO [train.py:898] (3/4) Epoch 17, batch 300, loss[loss=0.1848, simple_loss=0.2761, pruned_loss=0.04676, over 18305.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2593, pruned_loss=0.04184, over 2789372.54 frames. ], batch size: 54, lr: 6.75e-03, grad_scale: 8.0 2023-03-09 08:54:20,579 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.698e+02 2.907e+02 3.543e+02 4.450e+02 1.655e+03, threshold=7.087e+02, percent-clipped=7.0 2023-03-09 08:54:57,846 INFO [train.py:898] (3/4) Epoch 17, batch 350, loss[loss=0.153, simple_loss=0.2345, pruned_loss=0.03572, over 18499.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2579, pruned_loss=0.0413, over 2965851.64 frames. ], batch size: 44, lr: 6.75e-03, grad_scale: 8.0 2023-03-09 08:55:00,142 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=58497.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:55:00,266 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8773, 5.3290, 2.8761, 5.1556, 5.0901, 5.3887, 5.1265, 2.6623], device='cuda:3'), covar=tensor([0.0208, 0.0068, 0.0782, 0.0079, 0.0068, 0.0069, 0.0092, 0.1029], device='cuda:3'), in_proj_covar=tensor([0.0083, 0.0076, 0.0092, 0.0089, 0.0082, 0.0071, 0.0081, 0.0094], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 08:55:03,536 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=58500.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:55:46,446 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5497, 2.1436, 2.5621, 2.6294, 3.1413, 4.7313, 4.4524, 3.6509], device='cuda:3'), covar=tensor([0.1626, 0.2452, 0.2650, 0.1744, 0.2160, 0.0206, 0.0421, 0.0718], device='cuda:3'), in_proj_covar=tensor([0.0277, 0.0330, 0.0358, 0.0266, 0.0379, 0.0219, 0.0285, 0.0236], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 08:55:56,653 INFO [train.py:898] (3/4) Epoch 17, batch 400, loss[loss=0.1705, simple_loss=0.2635, pruned_loss=0.03874, over 17087.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2579, pruned_loss=0.04121, over 3115593.65 frames. ], batch size: 78, lr: 6.74e-03, grad_scale: 8.0 2023-03-09 08:56:18,009 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.966e+02 2.743e+02 3.133e+02 4.342e+02 9.924e+02, threshold=6.265e+02, percent-clipped=3.0 2023-03-09 08:56:54,599 INFO [train.py:898] (3/4) Epoch 17, batch 450, loss[loss=0.176, simple_loss=0.2655, pruned_loss=0.04322, over 18387.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.259, pruned_loss=0.0416, over 3215669.69 frames. ], batch size: 50, lr: 6.74e-03, grad_scale: 8.0 2023-03-09 08:57:24,862 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.58 vs. limit=2.0 2023-03-09 08:57:52,490 INFO [train.py:898] (3/4) Epoch 17, batch 500, loss[loss=0.1893, simple_loss=0.281, pruned_loss=0.04882, over 17046.00 frames. ], tot_loss[loss=0.171, simple_loss=0.259, pruned_loss=0.04151, over 3304883.18 frames. 
], batch size: 78, lr: 6.74e-03, grad_scale: 8.0 2023-03-09 08:58:13,805 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.815e+02 2.817e+02 3.097e+02 3.778e+02 7.071e+02, threshold=6.194e+02, percent-clipped=1.0 2023-03-09 08:58:33,060 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58681.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:58:35,756 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0 2023-03-09 08:58:49,767 INFO [train.py:898] (3/4) Epoch 17, batch 550, loss[loss=0.16, simple_loss=0.2457, pruned_loss=0.03715, over 18554.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2593, pruned_loss=0.04185, over 3364114.44 frames. ], batch size: 49, lr: 6.73e-03, grad_scale: 8.0 2023-03-09 08:59:38,002 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-09 08:59:44,809 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58742.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 08:59:47,845 INFO [train.py:898] (3/4) Epoch 17, batch 600, loss[loss=0.1964, simple_loss=0.2874, pruned_loss=0.05275, over 18337.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2594, pruned_loss=0.04151, over 3424371.17 frames. ], batch size: 56, lr: 6.73e-03, grad_scale: 8.0 2023-03-09 09:00:09,618 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.181e+02 2.820e+02 3.241e+02 3.843e+02 6.469e+02, threshold=6.481e+02, percent-clipped=2.0 2023-03-09 09:00:22,467 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58775.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:00:29,431 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4652, 2.8533, 2.2050, 2.8754, 3.4719, 3.4728, 3.1035, 3.0023], device='cuda:3'), covar=tensor([0.0173, 0.0229, 0.0668, 0.0321, 0.0145, 0.0135, 0.0280, 0.0290], device='cuda:3'), in_proj_covar=tensor([0.0130, 0.0128, 0.0160, 0.0149, 0.0118, 0.0105, 0.0146, 0.0143], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:00:45,233 INFO [train.py:898] (3/4) Epoch 17, batch 650, loss[loss=0.1695, simple_loss=0.2665, pruned_loss=0.03628, over 18502.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2592, pruned_loss=0.04123, over 3463301.57 frames. ], batch size: 51, lr: 6.73e-03, grad_scale: 8.0 2023-03-09 09:00:47,593 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-09 09:00:48,311 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58797.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:00:52,610 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58800.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:00:58,591 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.77 vs. 
limit=2.0 2023-03-09 09:01:06,081 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7967, 4.4667, 4.5227, 3.2990, 3.6389, 3.4806, 2.6248, 2.3554], device='cuda:3'), covar=tensor([0.0193, 0.0140, 0.0065, 0.0296, 0.0315, 0.0217, 0.0620, 0.0818], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0054, 0.0058, 0.0065, 0.0086, 0.0062, 0.0074, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 09:01:34,599 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58836.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:01:44,383 INFO [train.py:898] (3/4) Epoch 17, batch 700, loss[loss=0.1525, simple_loss=0.2468, pruned_loss=0.02913, over 18262.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2591, pruned_loss=0.04127, over 3476716.89 frames. ], batch size: 47, lr: 6.73e-03, grad_scale: 8.0 2023-03-09 09:01:44,610 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=58845.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:01:46,051 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.31 vs. limit=5.0 2023-03-09 09:01:48,449 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=58848.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:02:07,829 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.002e+02 2.825e+02 3.273e+02 3.706e+02 6.863e+02, threshold=6.547e+02, percent-clipped=2.0 2023-03-09 09:02:42,570 INFO [train.py:898] (3/4) Epoch 17, batch 750, loss[loss=0.1614, simple_loss=0.2448, pruned_loss=0.03905, over 18507.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2589, pruned_loss=0.0414, over 3496038.60 frames. ], batch size: 47, lr: 6.72e-03, grad_scale: 8.0 2023-03-09 09:02:42,988 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6067, 2.5159, 4.3158, 3.8209, 2.2725, 4.4863, 3.8503, 2.6210], device='cuda:3'), covar=tensor([0.0456, 0.1914, 0.0256, 0.0334, 0.2048, 0.0269, 0.0472, 0.1280], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0233, 0.0193, 0.0153, 0.0221, 0.0204, 0.0235, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 09:02:56,546 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58906.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:03:04,435 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5930, 2.3355, 2.6502, 2.7003, 3.1608, 5.0577, 4.9088, 3.9034], device='cuda:3'), covar=tensor([0.1652, 0.2317, 0.2983, 0.1716, 0.2407, 0.0176, 0.0312, 0.0676], device='cuda:3'), in_proj_covar=tensor([0.0276, 0.0330, 0.0360, 0.0265, 0.0378, 0.0220, 0.0284, 0.0236], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 09:03:40,648 INFO [train.py:898] (3/4) Epoch 17, batch 800, loss[loss=0.1635, simple_loss=0.2558, pruned_loss=0.03559, over 18379.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2599, pruned_loss=0.0417, over 3525057.58 frames. 
], batch size: 50, lr: 6.72e-03, grad_scale: 8.0 2023-03-09 09:04:04,577 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.057e+02 2.797e+02 3.307e+02 3.815e+02 9.263e+02, threshold=6.613e+02, percent-clipped=2.0 2023-03-09 09:04:08,302 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58967.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:04:20,996 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-09 09:04:29,984 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3481, 5.9242, 5.4132, 5.6848, 5.5090, 5.3153, 5.9330, 5.9264], device='cuda:3'), covar=tensor([0.1168, 0.0621, 0.0468, 0.0674, 0.1171, 0.0711, 0.0524, 0.0575], device='cuda:3'), in_proj_covar=tensor([0.0587, 0.0502, 0.0366, 0.0530, 0.0720, 0.0525, 0.0710, 0.0533], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 09:04:38,750 INFO [train.py:898] (3/4) Epoch 17, batch 850, loss[loss=0.1463, simple_loss=0.2306, pruned_loss=0.03102, over 18431.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2602, pruned_loss=0.042, over 3544302.63 frames. ], batch size: 43, lr: 6.72e-03, grad_scale: 8.0 2023-03-09 09:04:40,755 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.56 vs. limit=5.0 2023-03-09 09:04:42,486 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7944, 5.3337, 5.3274, 5.2969, 4.8445, 5.2105, 4.6072, 5.1833], device='cuda:3'), covar=tensor([0.0242, 0.0241, 0.0189, 0.0409, 0.0404, 0.0227, 0.1093, 0.0315], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0248, 0.0240, 0.0305, 0.0257, 0.0251, 0.0300, 0.0244], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 09:05:28,792 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59037.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:05:37,754 INFO [train.py:898] (3/4) Epoch 17, batch 900, loss[loss=0.1556, simple_loss=0.2427, pruned_loss=0.03429, over 18283.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2604, pruned_loss=0.04178, over 3562754.54 frames. ], batch size: 49, lr: 6.71e-03, grad_scale: 8.0 2023-03-09 09:05:59,616 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.891e+02 2.805e+02 3.225e+02 3.970e+02 5.951e+02, threshold=6.451e+02, percent-clipped=0.0 2023-03-09 09:06:36,336 INFO [train.py:898] (3/4) Epoch 17, batch 950, loss[loss=0.1677, simple_loss=0.2624, pruned_loss=0.03649, over 18341.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2606, pruned_loss=0.04172, over 3571863.70 frames. ], batch size: 55, lr: 6.71e-03, grad_scale: 8.0 2023-03-09 09:07:14,612 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. 
limit=2.0 2023-03-09 09:07:17,791 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6556, 2.2358, 2.6937, 2.8508, 3.2760, 5.0095, 4.8549, 3.6117], device='cuda:3'), covar=tensor([0.1611, 0.2356, 0.2762, 0.1620, 0.2201, 0.0185, 0.0332, 0.0783], device='cuda:3'), in_proj_covar=tensor([0.0276, 0.0331, 0.0360, 0.0265, 0.0378, 0.0222, 0.0285, 0.0236], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 09:07:19,893 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59131.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:07:23,771 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.59 vs. limit=2.0 2023-03-09 09:07:35,502 INFO [train.py:898] (3/4) Epoch 17, batch 1000, loss[loss=0.1693, simple_loss=0.2637, pruned_loss=0.0374, over 18343.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2602, pruned_loss=0.04162, over 3578778.55 frames. ], batch size: 56, lr: 6.71e-03, grad_scale: 16.0 2023-03-09 09:07:56,827 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.800e+02 2.712e+02 3.112e+02 3.820e+02 1.157e+03, threshold=6.224e+02, percent-clipped=3.0 2023-03-09 09:07:57,189 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59164.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:08:33,589 INFO [train.py:898] (3/4) Epoch 17, batch 1050, loss[loss=0.1561, simple_loss=0.2429, pruned_loss=0.03463, over 18357.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2601, pruned_loss=0.04151, over 3591165.18 frames. ], batch size: 46, lr: 6.71e-03, grad_scale: 16.0 2023-03-09 09:08:43,549 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9651, 3.4201, 2.5917, 3.2926, 4.0194, 2.6167, 3.3347, 3.4699], device='cuda:3'), covar=tensor([0.0200, 0.1042, 0.1284, 0.0632, 0.0138, 0.1080, 0.0656, 0.0608], device='cuda:3'), in_proj_covar=tensor([0.0147, 0.0256, 0.0196, 0.0189, 0.0108, 0.0176, 0.0208, 0.0215], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:08:51,297 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7920, 4.7705, 4.8867, 4.5795, 4.6517, 4.6221, 5.0048, 4.9830], device='cuda:3'), covar=tensor([0.0074, 0.0075, 0.0064, 0.0116, 0.0062, 0.0168, 0.0087, 0.0105], device='cuda:3'), in_proj_covar=tensor([0.0087, 0.0064, 0.0068, 0.0087, 0.0070, 0.0097, 0.0081, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-09 09:09:06,578 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6078, 6.1320, 5.5990, 5.9403, 5.7702, 5.5820, 6.2201, 6.1294], device='cuda:3'), covar=tensor([0.1188, 0.0771, 0.0396, 0.0691, 0.1316, 0.0694, 0.0517, 0.0671], device='cuda:3'), in_proj_covar=tensor([0.0588, 0.0504, 0.0368, 0.0531, 0.0722, 0.0527, 0.0710, 0.0534], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 09:09:08,977 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59225.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:09:09,180 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.54 vs. limit=2.0 2023-03-09 09:09:32,434 INFO [train.py:898] (3/4) Epoch 17, batch 1100, loss[loss=0.1918, simple_loss=0.279, pruned_loss=0.0523, over 18562.00 frames. 
], tot_loss[loss=0.1716, simple_loss=0.26, pruned_loss=0.0416, over 3591627.22 frames. ], batch size: 54, lr: 6.70e-03, grad_scale: 16.0 2023-03-09 09:09:52,003 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59262.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:09:54,097 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.737e+02 2.749e+02 3.337e+02 4.015e+02 7.609e+02, threshold=6.673e+02, percent-clipped=2.0 2023-03-09 09:10:31,685 INFO [train.py:898] (3/4) Epoch 17, batch 1150, loss[loss=0.1664, simple_loss=0.2452, pruned_loss=0.04379, over 18259.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2592, pruned_loss=0.04118, over 3586121.04 frames. ], batch size: 45, lr: 6.70e-03, grad_scale: 16.0 2023-03-09 09:11:19,771 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59337.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:11:29,503 INFO [train.py:898] (3/4) Epoch 17, batch 1200, loss[loss=0.1569, simple_loss=0.2465, pruned_loss=0.03366, over 18489.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2588, pruned_loss=0.04084, over 3594409.53 frames. ], batch size: 47, lr: 6.70e-03, grad_scale: 16.0 2023-03-09 09:11:42,558 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7560, 3.8498, 5.1913, 4.5803, 3.3087, 3.2048, 4.4952, 5.4808], device='cuda:3'), covar=tensor([0.0837, 0.1478, 0.0132, 0.0311, 0.0912, 0.1026, 0.0344, 0.0126], device='cuda:3'), in_proj_covar=tensor([0.0144, 0.0262, 0.0132, 0.0173, 0.0184, 0.0184, 0.0186, 0.0177], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:11:51,078 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.028e+02 2.851e+02 3.293e+02 4.093e+02 6.542e+02, threshold=6.585e+02, percent-clipped=0.0 2023-03-09 09:12:15,541 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=59385.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:12:25,310 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6482, 2.3326, 2.6823, 2.7857, 3.3400, 5.0884, 4.9716, 3.5948], device='cuda:3'), covar=tensor([0.1560, 0.2209, 0.2687, 0.1598, 0.2042, 0.0153, 0.0281, 0.0759], device='cuda:3'), in_proj_covar=tensor([0.0279, 0.0334, 0.0362, 0.0265, 0.0379, 0.0223, 0.0286, 0.0236], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 09:12:28,177 INFO [train.py:898] (3/4) Epoch 17, batch 1250, loss[loss=0.2008, simple_loss=0.2888, pruned_loss=0.05641, over 18549.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2582, pruned_loss=0.0408, over 3599295.60 frames. 
], batch size: 49, lr: 6.69e-03, grad_scale: 8.0 2023-03-09 09:13:05,711 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5144, 6.0616, 5.5381, 5.8703, 5.6950, 5.5842, 6.1737, 6.0925], device='cuda:3'), covar=tensor([0.1206, 0.0808, 0.0443, 0.0713, 0.1364, 0.0634, 0.0536, 0.0686], device='cuda:3'), in_proj_covar=tensor([0.0587, 0.0504, 0.0366, 0.0527, 0.0722, 0.0524, 0.0706, 0.0531], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 09:13:09,120 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59431.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:13:21,962 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6234, 2.3398, 2.5742, 2.6912, 3.2794, 4.9791, 4.7182, 3.5499], device='cuda:3'), covar=tensor([0.1625, 0.2335, 0.2865, 0.1706, 0.2244, 0.0165, 0.0388, 0.0808], device='cuda:3'), in_proj_covar=tensor([0.0279, 0.0334, 0.0363, 0.0266, 0.0381, 0.0223, 0.0286, 0.0237], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 09:13:26,465 INFO [train.py:898] (3/4) Epoch 17, batch 1300, loss[loss=0.1878, simple_loss=0.2797, pruned_loss=0.0479, over 18250.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2588, pruned_loss=0.04092, over 3589805.69 frames. ], batch size: 60, lr: 6.69e-03, grad_scale: 8.0 2023-03-09 09:13:48,930 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.812e+02 2.789e+02 3.310e+02 4.090e+02 6.733e+02, threshold=6.621e+02, percent-clipped=2.0 2023-03-09 09:14:05,082 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=59479.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:14:06,357 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3936, 5.8826, 5.3884, 5.6918, 5.4528, 5.4566, 5.9461, 5.8966], device='cuda:3'), covar=tensor([0.1135, 0.0744, 0.0476, 0.0696, 0.1330, 0.0624, 0.0560, 0.0642], device='cuda:3'), in_proj_covar=tensor([0.0585, 0.0502, 0.0364, 0.0524, 0.0721, 0.0523, 0.0704, 0.0530], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 09:14:24,100 INFO [train.py:898] (3/4) Epoch 17, batch 1350, loss[loss=0.1721, simple_loss=0.2621, pruned_loss=0.04105, over 18556.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2575, pruned_loss=0.04071, over 3597478.41 frames. ], batch size: 49, lr: 6.69e-03, grad_scale: 8.0 2023-03-09 09:14:33,554 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59502.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:14:47,739 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-09 09:14:53,671 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59520.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:15:17,171 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.33 vs. limit=5.0 2023-03-09 09:15:22,940 INFO [train.py:898] (3/4) Epoch 17, batch 1400, loss[loss=0.1792, simple_loss=0.2734, pruned_loss=0.04252, over 18212.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.258, pruned_loss=0.04083, over 3602368.36 frames. 
], batch size: 60, lr: 6.69e-03, grad_scale: 8.0 2023-03-09 09:15:43,379 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59562.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:15:44,567 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59563.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:15:46,382 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.823e+02 2.959e+02 3.481e+02 4.444e+02 9.729e+02, threshold=6.962e+02, percent-clipped=6.0 2023-03-09 09:16:19,228 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1802, 3.1349, 2.0592, 3.7531, 2.7109, 3.5315, 2.2585, 3.3126], device='cuda:3'), covar=tensor([0.0528, 0.0721, 0.1261, 0.0474, 0.0770, 0.0275, 0.1095, 0.0423], device='cuda:3'), in_proj_covar=tensor([0.0207, 0.0216, 0.0184, 0.0267, 0.0187, 0.0261, 0.0199, 0.0193], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:16:21,020 INFO [train.py:898] (3/4) Epoch 17, batch 1450, loss[loss=0.1875, simple_loss=0.2764, pruned_loss=0.04929, over 18042.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2583, pruned_loss=0.04084, over 3588699.33 frames. ], batch size: 65, lr: 6.68e-03, grad_scale: 8.0 2023-03-09 09:16:28,558 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-09 09:16:32,776 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4533, 2.2498, 2.5240, 2.6266, 3.2263, 4.8529, 4.6812, 3.2138], device='cuda:3'), covar=tensor([0.1773, 0.2309, 0.2842, 0.1754, 0.2155, 0.0183, 0.0349, 0.0881], device='cuda:3'), in_proj_covar=tensor([0.0280, 0.0334, 0.0363, 0.0267, 0.0382, 0.0223, 0.0288, 0.0237], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 09:16:37,057 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8061, 3.0711, 4.3752, 3.7650, 2.9353, 4.6700, 3.9237, 3.1522], device='cuda:3'), covar=tensor([0.0408, 0.1273, 0.0233, 0.0379, 0.1372, 0.0176, 0.0544, 0.0801], device='cuda:3'), in_proj_covar=tensor([0.0204, 0.0232, 0.0192, 0.0153, 0.0219, 0.0201, 0.0234, 0.0194], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 09:16:40,250 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=59610.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:17:20,444 INFO [train.py:898] (3/4) Epoch 17, batch 1500, loss[loss=0.1761, simple_loss=0.2675, pruned_loss=0.04233, over 17709.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2589, pruned_loss=0.04101, over 3594107.94 frames. ], batch size: 70, lr: 6.68e-03, grad_scale: 8.0 2023-03-09 09:17:21,073 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-09 09:17:44,115 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.960e+02 2.886e+02 3.557e+02 4.310e+02 1.324e+03, threshold=7.115e+02, percent-clipped=4.0 2023-03-09 09:18:18,455 INFO [train.py:898] (3/4) Epoch 17, batch 1550, loss[loss=0.169, simple_loss=0.2563, pruned_loss=0.04081, over 18609.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2589, pruned_loss=0.04123, over 3596727.08 frames. 
], batch size: 52, lr: 6.68e-03, grad_scale: 8.0 2023-03-09 09:18:24,080 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4691, 3.4212, 3.2975, 2.8131, 3.1008, 2.3647, 2.3700, 3.3476], device='cuda:3'), covar=tensor([0.0074, 0.0101, 0.0107, 0.0191, 0.0132, 0.0297, 0.0296, 0.0086], device='cuda:3'), in_proj_covar=tensor([0.0124, 0.0146, 0.0125, 0.0178, 0.0131, 0.0170, 0.0173, 0.0109], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 09:19:16,813 INFO [train.py:898] (3/4) Epoch 17, batch 1600, loss[loss=0.1825, simple_loss=0.2727, pruned_loss=0.04617, over 18017.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2591, pruned_loss=0.04141, over 3592341.79 frames. ], batch size: 65, lr: 6.67e-03, grad_scale: 8.0 2023-03-09 09:19:31,348 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-09 09:19:41,443 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.908e+02 2.635e+02 3.092e+02 3.637e+02 7.676e+02, threshold=6.183e+02, percent-clipped=1.0 2023-03-09 09:20:15,865 INFO [train.py:898] (3/4) Epoch 17, batch 1650, loss[loss=0.1721, simple_loss=0.2644, pruned_loss=0.03991, over 18484.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.259, pruned_loss=0.0414, over 3580076.58 frames. ], batch size: 53, lr: 6.67e-03, grad_scale: 8.0 2023-03-09 09:20:20,814 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-09 09:20:46,592 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59820.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:21:14,488 INFO [train.py:898] (3/4) Epoch 17, batch 1700, loss[loss=0.1933, simple_loss=0.2846, pruned_loss=0.05099, over 18392.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2586, pruned_loss=0.04128, over 3582335.27 frames. ], batch size: 56, lr: 6.67e-03, grad_scale: 8.0 2023-03-09 09:21:18,221 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59848.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:21:30,978 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59858.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:21:38,856 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.932e+02 2.816e+02 3.196e+02 3.802e+02 1.399e+03, threshold=6.391e+02, percent-clipped=5.0 2023-03-09 09:21:39,929 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3986, 3.2999, 2.2446, 4.2991, 3.0155, 4.1489, 2.2393, 3.7687], device='cuda:3'), covar=tensor([0.0714, 0.0876, 0.1466, 0.0464, 0.0872, 0.0337, 0.1333, 0.0452], device='cuda:3'), in_proj_covar=tensor([0.0206, 0.0216, 0.0184, 0.0266, 0.0187, 0.0259, 0.0197, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:21:43,512 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=59868.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:22:07,244 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59889.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:22:13,215 INFO [train.py:898] (3/4) Epoch 17, batch 1750, loss[loss=0.1815, simple_loss=0.2671, pruned_loss=0.04794, over 18388.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.258, pruned_loss=0.04116, over 3577950.88 frames. 
], batch size: 50, lr: 6.67e-03, grad_scale: 8.0 2023-03-09 09:22:30,609 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59909.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:22:54,058 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6750, 5.2703, 5.2383, 5.2718, 4.7313, 5.1116, 4.5489, 5.0859], device='cuda:3'), covar=tensor([0.0273, 0.0306, 0.0206, 0.0375, 0.0445, 0.0221, 0.1195, 0.0346], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0249, 0.0241, 0.0304, 0.0259, 0.0252, 0.0298, 0.0247], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 09:23:11,679 INFO [train.py:898] (3/4) Epoch 17, batch 1800, loss[loss=0.1635, simple_loss=0.2598, pruned_loss=0.03358, over 18557.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2584, pruned_loss=0.041, over 3580076.39 frames. ], batch size: 54, lr: 6.66e-03, grad_scale: 8.0 2023-03-09 09:23:17,880 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59950.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:23:35,354 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.961e+02 2.799e+02 3.065e+02 3.642e+02 5.911e+02, threshold=6.130e+02, percent-clipped=0.0 2023-03-09 09:23:46,019 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1564, 5.5907, 5.6002, 5.6135, 5.1496, 5.4839, 4.9547, 5.5353], device='cuda:3'), covar=tensor([0.0189, 0.0235, 0.0144, 0.0330, 0.0318, 0.0189, 0.0819, 0.0230], device='cuda:3'), in_proj_covar=tensor([0.0200, 0.0246, 0.0239, 0.0302, 0.0256, 0.0250, 0.0294, 0.0244], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 09:24:10,360 INFO [train.py:898] (3/4) Epoch 17, batch 1850, loss[loss=0.1546, simple_loss=0.2348, pruned_loss=0.03718, over 17708.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2583, pruned_loss=0.04095, over 3586189.22 frames. ], batch size: 39, lr: 6.66e-03, grad_scale: 8.0 2023-03-09 09:25:08,490 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60040.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:25:13,960 INFO [train.py:898] (3/4) Epoch 17, batch 1900, loss[loss=0.1508, simple_loss=0.237, pruned_loss=0.03227, over 18432.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2578, pruned_loss=0.04062, over 3596998.52 frames. 
], batch size: 42, lr: 6.66e-03, grad_scale: 8.0 2023-03-09 09:25:14,409 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6443, 4.1679, 2.6514, 4.0105, 3.9916, 4.1609, 4.0515, 2.6663], device='cuda:3'), covar=tensor([0.0198, 0.0081, 0.0702, 0.0171, 0.0091, 0.0088, 0.0109, 0.0913], device='cuda:3'), in_proj_covar=tensor([0.0084, 0.0077, 0.0093, 0.0091, 0.0083, 0.0072, 0.0083, 0.0095], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 09:25:22,433 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60052.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:25:37,668 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.972e+02 2.751e+02 3.346e+02 4.320e+02 1.006e+03, threshold=6.692e+02, percent-clipped=5.0 2023-03-09 09:25:38,025 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60065.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:25:53,544 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60078.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:26:12,273 INFO [train.py:898] (3/4) Epoch 17, batch 1950, loss[loss=0.1699, simple_loss=0.2554, pruned_loss=0.04222, over 18428.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2582, pruned_loss=0.04079, over 3590114.30 frames. ], batch size: 48, lr: 6.66e-03, grad_scale: 8.0 2023-03-09 09:26:19,460 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60101.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 09:26:33,473 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60113.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 09:26:48,393 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60126.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:27:04,099 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60139.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:27:08,980 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-09 09:27:10,624 INFO [train.py:898] (3/4) Epoch 17, batch 2000, loss[loss=0.1797, simple_loss=0.2783, pruned_loss=0.04053, over 18405.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2584, pruned_loss=0.04084, over 3592262.91 frames. ], batch size: 52, lr: 6.65e-03, grad_scale: 8.0 2023-03-09 09:27:25,224 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60158.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:27:33,337 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.223e+02 2.887e+02 3.368e+02 4.161e+02 9.381e+02, threshold=6.736e+02, percent-clipped=4.0 2023-03-09 09:27:33,955 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.66 vs. 
limit=5.0 2023-03-09 09:27:51,537 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5638, 2.7358, 2.4361, 2.8920, 3.5024, 3.4489, 3.0956, 2.8270], device='cuda:3'), covar=tensor([0.0170, 0.0266, 0.0560, 0.0352, 0.0197, 0.0159, 0.0349, 0.0404], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0126, 0.0160, 0.0150, 0.0119, 0.0106, 0.0148, 0.0147], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:28:07,234 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8439, 3.8109, 3.6613, 3.3111, 3.5560, 2.9793, 2.9191, 3.8213], device='cuda:3'), covar=tensor([0.0052, 0.0071, 0.0076, 0.0126, 0.0083, 0.0171, 0.0192, 0.0050], device='cuda:3'), in_proj_covar=tensor([0.0124, 0.0147, 0.0125, 0.0179, 0.0131, 0.0170, 0.0174, 0.0110], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3') 2023-03-09 09:28:08,989 INFO [train.py:898] (3/4) Epoch 17, batch 2050, loss[loss=0.164, simple_loss=0.2437, pruned_loss=0.04215, over 18268.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2583, pruned_loss=0.04103, over 3590200.92 frames. ], batch size: 45, lr: 6.65e-03, grad_scale: 8.0 2023-03-09 09:28:09,559 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-09 09:28:12,785 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60198.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:28:19,681 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60204.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:28:21,876 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=60206.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:28:28,890 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2733, 5.1833, 5.5893, 5.5779, 5.2685, 6.1088, 5.7662, 5.4062], device='cuda:3'), covar=tensor([0.1444, 0.0678, 0.0743, 0.0727, 0.1598, 0.0882, 0.0702, 0.1812], device='cuda:3'), in_proj_covar=tensor([0.0339, 0.0274, 0.0294, 0.0291, 0.0322, 0.0405, 0.0264, 0.0394], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3') 2023-03-09 09:28:56,409 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9489, 3.5945, 5.0135, 2.9332, 4.4136, 2.6835, 3.2152, 1.7452], device='cuda:3'), covar=tensor([0.1102, 0.0887, 0.0127, 0.0858, 0.0514, 0.2269, 0.2395, 0.2053], device='cuda:3'), in_proj_covar=tensor([0.0215, 0.0233, 0.0160, 0.0187, 0.0246, 0.0260, 0.0312, 0.0225], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 09:28:58,167 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4639, 2.1489, 2.3676, 2.3966, 2.9827, 4.7405, 4.5184, 3.6099], device='cuda:3'), covar=tensor([0.1751, 0.2684, 0.3136, 0.2017, 0.2523, 0.0229, 0.0406, 0.0686], device='cuda:3'), in_proj_covar=tensor([0.0280, 0.0333, 0.0361, 0.0266, 0.0380, 0.0223, 0.0287, 0.0237], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 09:29:07,879 INFO [train.py:898] (3/4) Epoch 17, batch 2100, loss[loss=0.2329, simple_loss=0.3093, pruned_loss=0.07821, over 12530.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2587, pruned_loss=0.04115, over 3583526.46 frames. 
], batch size: 129, lr: 6.65e-03, grad_scale: 8.0 2023-03-09 09:29:08,080 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60245.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:29:24,107 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60259.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:29:30,558 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.013e+02 2.803e+02 3.288e+02 3.914e+02 1.145e+03, threshold=6.576e+02, percent-clipped=2.0 2023-03-09 09:30:07,002 INFO [train.py:898] (3/4) Epoch 17, batch 2150, loss[loss=0.1923, simple_loss=0.279, pruned_loss=0.05279, over 18631.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2577, pruned_loss=0.04109, over 3591252.85 frames. ], batch size: 52, lr: 6.64e-03, grad_scale: 8.0 2023-03-09 09:30:19,966 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.87 vs. limit=5.0 2023-03-09 09:31:03,793 INFO [train.py:898] (3/4) Epoch 17, batch 2200, loss[loss=0.17, simple_loss=0.26, pruned_loss=0.04003, over 18314.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2587, pruned_loss=0.04154, over 3586635.78 frames. ], batch size: 54, lr: 6.64e-03, grad_scale: 8.0 2023-03-09 09:31:09,205 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4864, 2.1677, 2.4929, 2.5179, 3.0852, 4.7238, 4.4724, 3.6715], device='cuda:3'), covar=tensor([0.1619, 0.2311, 0.2879, 0.1735, 0.2184, 0.0213, 0.0393, 0.0653], device='cuda:3'), in_proj_covar=tensor([0.0280, 0.0334, 0.0363, 0.0267, 0.0381, 0.0223, 0.0287, 0.0238], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 09:31:26,300 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.966e+02 2.999e+02 3.869e+02 4.912e+02 1.337e+03, threshold=7.738e+02, percent-clipped=7.0 2023-03-09 09:32:01,242 INFO [train.py:898] (3/4) Epoch 17, batch 2250, loss[loss=0.1561, simple_loss=0.2352, pruned_loss=0.0385, over 17245.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2591, pruned_loss=0.04169, over 3589238.96 frames. ], batch size: 38, lr: 6.64e-03, grad_scale: 4.0 2023-03-09 09:32:02,632 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60396.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 09:32:16,877 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60408.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 09:32:17,252 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.46 vs. limit=2.0 2023-03-09 09:32:20,576 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. 
limit=2.0 2023-03-09 09:32:31,522 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60421.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:32:35,093 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7974, 3.6843, 3.6570, 3.2952, 3.6056, 2.9008, 2.8213, 3.8838], device='cuda:3'), covar=tensor([0.0071, 0.0114, 0.0088, 0.0150, 0.0088, 0.0201, 0.0213, 0.0059], device='cuda:3'), in_proj_covar=tensor([0.0125, 0.0148, 0.0126, 0.0179, 0.0131, 0.0171, 0.0175, 0.0111], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 09:32:47,122 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60434.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:32:47,614 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-09 09:32:58,977 INFO [train.py:898] (3/4) Epoch 17, batch 2300, loss[loss=0.1696, simple_loss=0.2652, pruned_loss=0.03704, over 16414.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2586, pruned_loss=0.04134, over 3590881.07 frames. ], batch size: 95, lr: 6.64e-03, grad_scale: 4.0 2023-03-09 09:32:59,333 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0412, 5.2682, 2.7334, 5.1282, 4.9739, 5.3268, 5.1548, 2.7219], device='cuda:3'), covar=tensor([0.0175, 0.0070, 0.0804, 0.0072, 0.0078, 0.0064, 0.0079, 0.0932], device='cuda:3'), in_proj_covar=tensor([0.0084, 0.0078, 0.0094, 0.0092, 0.0084, 0.0072, 0.0083, 0.0095], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 09:33:16,388 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.55 vs. limit=2.0 2023-03-09 09:33:23,538 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.591e+02 2.714e+02 3.247e+02 3.774e+02 7.565e+02, threshold=6.493e+02, percent-clipped=0.0 2023-03-09 09:33:55,762 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8453, 4.9072, 5.0171, 4.6245, 4.7540, 4.7171, 5.0262, 5.0604], device='cuda:3'), covar=tensor([0.0081, 0.0071, 0.0063, 0.0116, 0.0077, 0.0161, 0.0085, 0.0157], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0064, 0.0069, 0.0089, 0.0070, 0.0098, 0.0081, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 09:33:57,582 INFO [train.py:898] (3/4) Epoch 17, batch 2350, loss[loss=0.1828, simple_loss=0.2746, pruned_loss=0.04554, over 16203.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2592, pruned_loss=0.0414, over 3585851.92 frames. ], batch size: 94, lr: 6.63e-03, grad_scale: 4.0 2023-03-09 09:34:09,152 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60504.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:34:56,309 INFO [train.py:898] (3/4) Epoch 17, batch 2400, loss[loss=0.1552, simple_loss=0.2401, pruned_loss=0.03517, over 18364.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2597, pruned_loss=0.04129, over 3593900.24 frames. 
], batch size: 46, lr: 6.63e-03, grad_scale: 8.0 2023-03-09 09:34:56,560 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60545.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:35:05,014 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=60552.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:35:07,319 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60554.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:35:20,247 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.011e+02 2.942e+02 3.643e+02 4.208e+02 9.657e+02, threshold=7.287e+02, percent-clipped=4.0 2023-03-09 09:35:52,017 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=60593.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:35:54,160 INFO [train.py:898] (3/4) Epoch 17, batch 2450, loss[loss=0.1515, simple_loss=0.2365, pruned_loss=0.03327, over 18431.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2598, pruned_loss=0.04139, over 3592865.68 frames. ], batch size: 43, lr: 6.63e-03, grad_scale: 8.0 2023-03-09 09:36:23,285 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0776, 5.1080, 5.1351, 4.8953, 4.8869, 4.9652, 5.2777, 5.2560], device='cuda:3'), covar=tensor([0.0068, 0.0059, 0.0060, 0.0097, 0.0057, 0.0141, 0.0070, 0.0112], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0065, 0.0069, 0.0089, 0.0071, 0.0099, 0.0082, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 09:36:26,565 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6348, 6.1634, 5.6108, 5.9241, 5.7568, 5.6319, 6.2249, 6.1807], device='cuda:3'), covar=tensor([0.1376, 0.0699, 0.0417, 0.0729, 0.1451, 0.0771, 0.0587, 0.0608], device='cuda:3'), in_proj_covar=tensor([0.0588, 0.0502, 0.0367, 0.0522, 0.0718, 0.0525, 0.0707, 0.0530], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 09:36:45,539 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5745, 3.4322, 2.3285, 4.3651, 3.1881, 4.2571, 2.4847, 3.9715], device='cuda:3'), covar=tensor([0.0586, 0.0785, 0.1440, 0.0470, 0.0796, 0.0307, 0.1176, 0.0394], device='cuda:3'), in_proj_covar=tensor([0.0209, 0.0219, 0.0189, 0.0274, 0.0189, 0.0263, 0.0201, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:36:52,590 INFO [train.py:898] (3/4) Epoch 17, batch 2500, loss[loss=0.1561, simple_loss=0.2446, pruned_loss=0.03384, over 18287.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2595, pruned_loss=0.04143, over 3586686.96 frames. ], batch size: 49, lr: 6.63e-03, grad_scale: 8.0 2023-03-09 09:37:17,216 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.797e+02 2.798e+02 3.205e+02 3.786e+02 6.512e+02, threshold=6.411e+02, percent-clipped=0.0 2023-03-09 09:37:26,888 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-09 09:37:51,487 INFO [train.py:898] (3/4) Epoch 17, batch 2550, loss[loss=0.1839, simple_loss=0.2743, pruned_loss=0.04672, over 18097.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2584, pruned_loss=0.04092, over 3595848.39 frames. 
], batch size: 62, lr: 6.62e-03, grad_scale: 8.0 2023-03-09 09:37:53,008 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60696.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:38:06,239 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60708.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:38:18,163 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60718.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:38:21,265 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60721.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:38:36,111 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60734.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:38:38,471 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5943, 2.1699, 2.5932, 2.6984, 3.3652, 4.8799, 4.5385, 3.3859], device='cuda:3'), covar=tensor([0.1661, 0.2397, 0.2859, 0.1661, 0.2004, 0.0190, 0.0412, 0.0823], device='cuda:3'), in_proj_covar=tensor([0.0283, 0.0336, 0.0366, 0.0269, 0.0384, 0.0226, 0.0290, 0.0241], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3') 2023-03-09 09:38:47,883 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=60744.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:38:48,803 INFO [train.py:898] (3/4) Epoch 17, batch 2600, loss[loss=0.173, simple_loss=0.268, pruned_loss=0.03905, over 18563.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2592, pruned_loss=0.04124, over 3583529.03 frames. ], batch size: 54, lr: 6.62e-03, grad_scale: 8.0 2023-03-09 09:38:59,954 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8193, 3.1417, 2.5220, 3.2771, 3.8346, 3.7139, 3.4574, 3.2577], device='cuda:3'), covar=tensor([0.0247, 0.0219, 0.0619, 0.0292, 0.0140, 0.0147, 0.0263, 0.0299], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0129, 0.0161, 0.0153, 0.0122, 0.0108, 0.0151, 0.0149], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:39:01,920 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=60756.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:39:13,463 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.006e+02 2.726e+02 3.213e+02 3.687e+02 6.855e+02, threshold=6.427e+02, percent-clipped=2.0 2023-03-09 09:39:16,947 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=60769.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:39:28,309 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60779.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:39:31,187 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=60782.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:39:45,259 INFO [train.py:898] (3/4) Epoch 17, batch 2650, loss[loss=0.1869, simple_loss=0.2754, pruned_loss=0.0492, over 15910.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2593, pruned_loss=0.04136, over 3596035.95 frames. 
], batch size: 94, lr: 6.62e-03, grad_scale: 8.0 2023-03-09 09:39:51,050 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5006, 6.0726, 5.5607, 5.8188, 5.6272, 5.4361, 6.0911, 6.0946], device='cuda:3'), covar=tensor([0.1124, 0.0696, 0.0406, 0.0648, 0.1400, 0.0672, 0.0571, 0.0634], device='cuda:3'), in_proj_covar=tensor([0.0581, 0.0502, 0.0362, 0.0519, 0.0712, 0.0519, 0.0704, 0.0526], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 09:40:27,368 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.16 vs. limit=5.0 2023-03-09 09:40:43,823 INFO [train.py:898] (3/4) Epoch 17, batch 2700, loss[loss=0.1669, simple_loss=0.2597, pruned_loss=0.03708, over 18496.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2596, pruned_loss=0.04126, over 3598204.68 frames. ], batch size: 51, lr: 6.61e-03, grad_scale: 8.0 2023-03-09 09:40:54,830 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60854.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:41:08,389 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.851e+02 2.950e+02 3.306e+02 4.033e+02 9.458e+02, threshold=6.612e+02, percent-clipped=5.0 2023-03-09 09:41:42,538 INFO [train.py:898] (3/4) Epoch 17, batch 2750, loss[loss=0.1675, simple_loss=0.2562, pruned_loss=0.03943, over 17726.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2587, pruned_loss=0.04104, over 3597614.51 frames. ], batch size: 70, lr: 6.61e-03, grad_scale: 8.0 2023-03-09 09:41:51,127 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=60902.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:42:39,342 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6324, 5.1748, 5.1320, 5.1546, 4.6394, 5.0179, 4.4292, 5.0169], device='cuda:3'), covar=tensor([0.0275, 0.0304, 0.0221, 0.0452, 0.0427, 0.0246, 0.1182, 0.0373], device='cuda:3'), in_proj_covar=tensor([0.0204, 0.0250, 0.0242, 0.0306, 0.0258, 0.0253, 0.0297, 0.0247], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 09:42:41,194 INFO [train.py:898] (3/4) Epoch 17, batch 2800, loss[loss=0.18, simple_loss=0.2723, pruned_loss=0.04385, over 18363.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2591, pruned_loss=0.04088, over 3592346.22 frames. ], batch size: 56, lr: 6.61e-03, grad_scale: 8.0 2023-03-09 09:43:06,399 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.038e+02 2.738e+02 3.235e+02 3.720e+02 7.893e+02, threshold=6.471e+02, percent-clipped=2.0 2023-03-09 09:43:16,081 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-09 09:43:40,114 INFO [train.py:898] (3/4) Epoch 17, batch 2850, loss[loss=0.1539, simple_loss=0.2433, pruned_loss=0.03225, over 18263.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.259, pruned_loss=0.04083, over 3592949.62 frames. 
], batch size: 49, lr: 6.61e-03, grad_scale: 8.0 2023-03-09 09:43:53,455 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9736, 3.8114, 5.0579, 4.5216, 3.3187, 3.0892, 4.5348, 5.3500], device='cuda:3'), covar=tensor([0.0774, 0.1458, 0.0188, 0.0383, 0.0971, 0.1204, 0.0410, 0.0244], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0265, 0.0137, 0.0176, 0.0186, 0.0187, 0.0188, 0.0181], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:44:27,394 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8248, 3.7200, 3.6439, 3.1744, 3.5494, 2.9506, 2.8221, 3.8043], device='cuda:3'), covar=tensor([0.0052, 0.0084, 0.0070, 0.0129, 0.0081, 0.0165, 0.0185, 0.0058], device='cuda:3'), in_proj_covar=tensor([0.0127, 0.0150, 0.0129, 0.0180, 0.0133, 0.0172, 0.0176, 0.0111], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 09:44:39,447 INFO [train.py:898] (3/4) Epoch 17, batch 2900, loss[loss=0.177, simple_loss=0.2652, pruned_loss=0.04445, over 18548.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2585, pruned_loss=0.04082, over 3592368.27 frames. ], batch size: 49, lr: 6.60e-03, grad_scale: 8.0 2023-03-09 09:44:58,364 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61061.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:45:04,193 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.725e+02 2.779e+02 3.469e+02 4.268e+02 9.879e+02, threshold=6.938e+02, percent-clipped=3.0 2023-03-09 09:45:13,445 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=61074.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:45:34,725 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6445, 3.6260, 3.4802, 3.1144, 3.3769, 2.7485, 2.7153, 3.7364], device='cuda:3'), covar=tensor([0.0096, 0.0084, 0.0077, 0.0133, 0.0107, 0.0196, 0.0196, 0.0061], device='cuda:3'), in_proj_covar=tensor([0.0126, 0.0149, 0.0128, 0.0179, 0.0132, 0.0172, 0.0175, 0.0111], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 09:45:37,772 INFO [train.py:898] (3/4) Epoch 17, batch 2950, loss[loss=0.1662, simple_loss=0.2473, pruned_loss=0.04254, over 18384.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2587, pruned_loss=0.04107, over 3588325.58 frames. ], batch size: 46, lr: 6.60e-03, grad_scale: 4.0 2023-03-09 09:45:59,995 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2837, 5.1845, 5.5110, 5.6049, 5.2316, 6.1084, 5.7473, 5.3586], device='cuda:3'), covar=tensor([0.1062, 0.0671, 0.0734, 0.0749, 0.1574, 0.0780, 0.0625, 0.1555], device='cuda:3'), in_proj_covar=tensor([0.0332, 0.0266, 0.0289, 0.0285, 0.0314, 0.0396, 0.0258, 0.0385], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3') 2023-03-09 09:46:09,530 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61122.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:46:36,107 INFO [train.py:898] (3/4) Epoch 17, batch 3000, loss[loss=0.1648, simple_loss=0.2526, pruned_loss=0.03851, over 18370.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2584, pruned_loss=0.04137, over 3585447.12 frames. 
], batch size: 50, lr: 6.60e-03, grad_scale: 4.0 2023-03-09 09:46:36,108 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 09:46:48,251 INFO [train.py:932] (3/4) Epoch 17, validation: loss=0.1521, simple_loss=0.2525, pruned_loss=0.02589, over 944034.00 frames. 2023-03-09 09:46:48,252 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 09:47:00,166 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0114, 5.0172, 5.0775, 4.9035, 4.7602, 4.9266, 5.2175, 5.1643], device='cuda:3'), covar=tensor([0.0066, 0.0067, 0.0059, 0.0096, 0.0069, 0.0114, 0.0072, 0.0134], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0064, 0.0069, 0.0088, 0.0070, 0.0098, 0.0082, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 09:47:14,275 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.190e+02 3.156e+02 3.810e+02 4.470e+02 9.676e+02, threshold=7.619e+02, percent-clipped=4.0 2023-03-09 09:47:46,766 INFO [train.py:898] (3/4) Epoch 17, batch 3050, loss[loss=0.1825, simple_loss=0.2629, pruned_loss=0.05111, over 16258.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2581, pruned_loss=0.04121, over 3588235.24 frames. ], batch size: 94, lr: 6.60e-03, grad_scale: 4.0 2023-03-09 09:48:44,415 INFO [train.py:898] (3/4) Epoch 17, batch 3100, loss[loss=0.1812, simple_loss=0.2722, pruned_loss=0.0451, over 18477.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2584, pruned_loss=0.04111, over 3595651.70 frames. ], batch size: 53, lr: 6.59e-03, grad_scale: 4.0 2023-03-09 09:49:09,870 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.926e+02 2.710e+02 3.175e+02 4.043e+02 1.016e+03, threshold=6.350e+02, percent-clipped=3.0 2023-03-09 09:49:42,486 INFO [train.py:898] (3/4) Epoch 17, batch 3150, loss[loss=0.1698, simple_loss=0.2633, pruned_loss=0.03817, over 18492.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2584, pruned_loss=0.04122, over 3589322.10 frames. ], batch size: 53, lr: 6.59e-03, grad_scale: 4.0 2023-03-09 09:49:45,042 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0928, 3.4002, 3.3593, 2.8718, 2.9717, 2.7999, 2.5149, 2.1818], device='cuda:3'), covar=tensor([0.0255, 0.0182, 0.0152, 0.0309, 0.0359, 0.0254, 0.0598, 0.0809], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0056, 0.0059, 0.0066, 0.0087, 0.0063, 0.0076, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 09:49:49,777 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5532, 2.2026, 2.5711, 2.5980, 3.2100, 4.7940, 4.5694, 3.2957], device='cuda:3'), covar=tensor([0.1632, 0.2307, 0.2645, 0.1744, 0.2145, 0.0179, 0.0375, 0.0845], device='cuda:3'), in_proj_covar=tensor([0.0283, 0.0334, 0.0366, 0.0270, 0.0384, 0.0225, 0.0289, 0.0241], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 09:50:40,782 INFO [train.py:898] (3/4) Epoch 17, batch 3200, loss[loss=0.1779, simple_loss=0.2642, pruned_loss=0.04581, over 18488.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.258, pruned_loss=0.0412, over 3587117.63 frames. 
], batch size: 53, lr: 6.59e-03, grad_scale: 8.0 2023-03-09 09:50:59,904 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0405, 5.1012, 5.1962, 4.9689, 4.8927, 4.9096, 5.2477, 5.2362], device='cuda:3'), covar=tensor([0.0056, 0.0057, 0.0041, 0.0080, 0.0052, 0.0163, 0.0062, 0.0075], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0064, 0.0069, 0.0087, 0.0069, 0.0098, 0.0082, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:3') 2023-03-09 09:51:06,763 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.872e+02 2.721e+02 3.197e+02 3.884e+02 6.601e+02, threshold=6.394e+02, percent-clipped=2.0 2023-03-09 09:51:10,498 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3522, 5.9348, 5.3668, 5.7017, 5.4757, 5.3532, 5.9772, 5.9098], device='cuda:3'), covar=tensor([0.1209, 0.0686, 0.0527, 0.0684, 0.1479, 0.0685, 0.0597, 0.0655], device='cuda:3'), in_proj_covar=tensor([0.0595, 0.0510, 0.0373, 0.0535, 0.0730, 0.0530, 0.0720, 0.0541], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 09:51:14,928 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=61374.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:51:38,940 INFO [train.py:898] (3/4) Epoch 17, batch 3250, loss[loss=0.175, simple_loss=0.2662, pruned_loss=0.04191, over 18351.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2581, pruned_loss=0.04124, over 3586211.08 frames. ], batch size: 56, lr: 6.59e-03, grad_scale: 8.0 2023-03-09 09:51:45,655 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 09:52:05,274 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=61417.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:52:10,914 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=61422.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 09:52:37,391 INFO [train.py:898] (3/4) Epoch 17, batch 3300, loss[loss=0.163, simple_loss=0.254, pruned_loss=0.03596, over 18287.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.259, pruned_loss=0.0417, over 3587014.33 frames. ], batch size: 49, lr: 6.58e-03, grad_scale: 8.0 2023-03-09 09:53:01,556 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6141, 2.7626, 2.6149, 2.9479, 3.6364, 3.5243, 3.1133, 3.0319], device='cuda:3'), covar=tensor([0.0186, 0.0297, 0.0516, 0.0325, 0.0176, 0.0171, 0.0369, 0.0367], device='cuda:3'), in_proj_covar=tensor([0.0131, 0.0126, 0.0158, 0.0151, 0.0118, 0.0107, 0.0148, 0.0144], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 09:53:02,170 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.987e+02 2.924e+02 3.407e+02 4.178e+02 8.081e+02, threshold=6.814e+02, percent-clipped=6.0 2023-03-09 09:53:14,813 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-09 09:53:35,257 INFO [train.py:898] (3/4) Epoch 17, batch 3350, loss[loss=0.1846, simple_loss=0.2699, pruned_loss=0.04965, over 18349.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2591, pruned_loss=0.04172, over 3587248.22 frames. 
], batch size: 56, lr: 6.58e-03, grad_scale: 8.0 2023-03-09 09:54:33,205 INFO [train.py:898] (3/4) Epoch 17, batch 3400, loss[loss=0.1714, simple_loss=0.2587, pruned_loss=0.04208, over 17117.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2592, pruned_loss=0.04146, over 3590632.80 frames. ], batch size: 78, lr: 6.58e-03, grad_scale: 8.0 2023-03-09 09:54:58,506 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.971e+02 2.720e+02 3.056e+02 3.431e+02 6.999e+02, threshold=6.113e+02, percent-clipped=1.0 2023-03-09 09:55:31,067 INFO [train.py:898] (3/4) Epoch 17, batch 3450, loss[loss=0.1869, simple_loss=0.2699, pruned_loss=0.05194, over 18299.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2596, pruned_loss=0.04141, over 3585257.27 frames. ], batch size: 57, lr: 6.57e-03, grad_scale: 8.0 2023-03-09 09:56:30,011 INFO [train.py:898] (3/4) Epoch 17, batch 3500, loss[loss=0.1708, simple_loss=0.2729, pruned_loss=0.03429, over 18074.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2606, pruned_loss=0.04145, over 3573967.22 frames. ], batch size: 62, lr: 6.57e-03, grad_scale: 8.0 2023-03-09 09:56:38,569 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0148, 4.8001, 4.8536, 3.6593, 4.0518, 3.7614, 2.9541, 2.5439], device='cuda:3'), covar=tensor([0.0207, 0.0123, 0.0057, 0.0266, 0.0284, 0.0180, 0.0659, 0.0819], device='cuda:3'), in_proj_covar=tensor([0.0066, 0.0055, 0.0058, 0.0065, 0.0085, 0.0062, 0.0075, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 09:56:56,114 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.780e+02 2.828e+02 3.319e+02 4.046e+02 7.174e+02, threshold=6.638e+02, percent-clipped=2.0 2023-03-09 09:57:25,973 INFO [train.py:898] (3/4) Epoch 17, batch 3550, loss[loss=0.1549, simple_loss=0.2524, pruned_loss=0.02873, over 18374.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2595, pruned_loss=0.0411, over 3579598.17 frames. 
2023-03-09 09:57:27,441 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7482, 2.8715, 2.6845, 3.0790, 3.6950, 3.6142, 3.2382, 3.0567], device='cuda:3'), covar=tensor([0.0187, 0.0290, 0.0519, 0.0346, 0.0183, 0.0172, 0.0362, 0.0377], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0126, 0.0159, 0.0151, 0.0119, 0.0107, 0.0148, 0.0146], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 09:57:36,327 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2058, 3.7720, 5.1005, 2.9043, 4.5255, 2.5910, 3.1308, 1.7457], device='cuda:3'), covar=tensor([0.0966, 0.0948, 0.0135, 0.0886, 0.0497, 0.2641, 0.2640, 0.2220], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0239, 0.0168, 0.0190, 0.0251, 0.0266, 0.0319, 0.0230], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 09:57:50,674 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=61717.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 09:58:07,040 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61732.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 09:58:18,922 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61743.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 09:58:20,836 INFO [train.py:898] (3/4) Epoch 17, batch 3600, loss[loss=0.1802, simple_loss=0.2731, pruned_loss=0.04366, over 18274.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2598, pruned_loss=0.04143, over 3572408.94 frames. ], batch size: 60, lr: 6.57e-03, grad_scale: 8.0
2023-03-09 09:58:42,562 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=61765.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 09:58:44,428 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.882e+02 3.033e+02 3.500e+02 4.086e+02 7.199e+02, threshold=7.000e+02, percent-clipped=1.0
2023-03-09 09:59:23,037 INFO [train.py:898] (3/4) Epoch 18, batch 0, loss[loss=0.1801, simple_loss=0.2728, pruned_loss=0.04374, over 18333.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2728, pruned_loss=0.04374, over 18333.00 frames. ], batch size: 56, lr: 6.38e-03, grad_scale: 8.0
2023-03-09 09:59:23,038 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 09:59:34,862 INFO [train.py:932] (3/4) Epoch 18, validation: loss=0.1526, simple_loss=0.2531, pruned_loss=0.0261, over 944034.00 frames.
2023-03-09 09:59:34,863 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 09:59:50,969 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61793.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:00:05,772 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61804.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:00:34,728 INFO [train.py:898] (3/4) Epoch 18, batch 50, loss[loss=0.1434, simple_loss=0.2244, pruned_loss=0.03119, over 18492.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2559, pruned_loss=0.04066, over 813408.96 frames. ], batch size: 44, lr: 6.37e-03, grad_scale: 8.0
2023-03-09 10:01:03,041 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.98 vs. limit=5.0
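The zipformer.py:1455 diagnostics print one attn_weights_entropy value per attention head (eight here). A sketch of how such a per-head statistic can be computed; the exact reduction used in zipformer.py is an assumption:

```python
import torch

def attention_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
    """attn: (num_heads, num_queries, num_keys), each row summing to 1."""
    p = attn.clamp_min(eps)               # avoid log(0)
    entropy = -(p * p.log()).sum(dim=-1)  # entropy over the key axis
    return entropy.mean(dim=-1)           # one averaged value per head
```

Low entropy means a head attends to few positions; values near log(num_keys) mean nearly uniform attention.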
2023-03-09 10:01:20,174 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.961e+02 2.892e+02 3.419e+02 4.081e+02 7.756e+02, threshold=6.838e+02, percent-clipped=1.0
2023-03-09 10:01:33,970 INFO [train.py:898] (3/4) Epoch 18, batch 100, loss[loss=0.1955, simple_loss=0.2962, pruned_loss=0.04735, over 17266.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2556, pruned_loss=0.03952, over 1432315.78 frames. ], batch size: 78, lr: 6.37e-03, grad_scale: 8.0
2023-03-09 10:02:15,389 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61914.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:02:32,559 INFO [train.py:898] (3/4) Epoch 18, batch 150, loss[loss=0.1762, simple_loss=0.2588, pruned_loss=0.04675, over 18409.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2558, pruned_loss=0.03995, over 1920776.13 frames. ], batch size: 48, lr: 6.37e-03, grad_scale: 8.0
2023-03-09 10:03:17,467 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.774e+02 2.985e+02 3.348e+02 3.997e+02 9.242e+02, threshold=6.695e+02, percent-clipped=1.0
2023-03-09 10:03:27,643 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61975.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:03:32,034 INFO [train.py:898] (3/4) Epoch 18, batch 200, loss[loss=0.176, simple_loss=0.2663, pruned_loss=0.04284, over 18306.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2551, pruned_loss=0.04003, over 2281857.77 frames. ], batch size: 54, lr: 6.37e-03, grad_scale: 8.0
2023-03-09 10:03:38,354 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-09 10:04:35,245 INFO [train.py:898] (3/4) Epoch 18, batch 250, loss[loss=0.1944, simple_loss=0.2838, pruned_loss=0.05248, over 18456.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2564, pruned_loss=0.04045, over 2575600.60 frames. ], batch size: 59, lr: 6.36e-03, grad_scale: 8.0
2023-03-09 10:04:39,046 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6709, 2.8711, 4.3520, 3.5472, 2.5938, 4.5851, 3.9718, 2.9321], device='cuda:3'), covar=tensor([0.0505, 0.1376, 0.0219, 0.0473, 0.1546, 0.0191, 0.0452, 0.0967], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0235, 0.0195, 0.0155, 0.0221, 0.0208, 0.0237, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 10:05:19,648 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.960e+02 2.782e+02 3.298e+02 4.084e+02 9.101e+02, threshold=6.597e+02, percent-clipped=2.0
2023-03-09 10:05:34,863 INFO [train.py:898] (3/4) Epoch 18, batch 300, loss[loss=0.163, simple_loss=0.2408, pruned_loss=0.04254, over 18565.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2563, pruned_loss=0.04, over 2807658.41 frames. ], batch size: 45, lr: 6.36e-03, grad_scale: 8.0
2023-03-09 10:05:45,149 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62088.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:05:57,596 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62099.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:06:25,713 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62122.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:06:33,274 INFO [train.py:898] (3/4) Epoch 18, batch 350, loss[loss=0.165, simple_loss=0.2516, pruned_loss=0.03925, over 18372.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2573, pruned_loss=0.04019, over 2973641.99 frames. ], batch size: 50, lr: 6.36e-03, grad_scale: 8.0
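The scaling.py:679 "Whitening" lines compare a metric against a limit (e.g. metric=1.87 vs. limit=2.0). A sketch of a whiteness measure of that flavor: it equals 1.0 when the per-group feature covariance is proportional to the identity and grows as the eigenvalue spread increases. The details are simplified and should be treated as assumptions, not the exact scaling.py code:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """x: (num_frames, num_channels); channels split into num_groups groups."""
    num_frames, num_channels = x.shape
    c = num_channels // num_groups
    x = x.reshape(num_frames, num_groups, c).transpose(0, 1)
    x = x - x.mean(dim=1, keepdim=True)            # center the features
    cov = x.transpose(1, 2) @ x                    # (num_groups, c, c)
    mean_diag = cov.diagonal(dim1=1, dim2=2).mean()
    mean_sq = (cov ** 2).sum() / (num_groups * c)  # mean diag of cov @ cov
    return mean_sq / (mean_diag ** 2 + 1e-20)      # >= 1.0; larger = less white
```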
2023-03-09 10:07:02,593 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6397, 3.5813, 3.4411, 2.9869, 3.3297, 2.5241, 2.2403, 3.6394], device='cuda:3'), covar=tensor([0.0053, 0.0078, 0.0071, 0.0139, 0.0098, 0.0216, 0.0330, 0.0059], device='cuda:3'), in_proj_covar=tensor([0.0124, 0.0147, 0.0125, 0.0176, 0.0131, 0.0168, 0.0173, 0.0109], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001], device='cuda:3')
2023-03-09 10:07:17,194 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.168e+02 2.722e+02 3.288e+02 4.143e+02 7.377e+02, threshold=6.576e+02, percent-clipped=3.0
2023-03-09 10:07:32,460 INFO [train.py:898] (3/4) Epoch 18, batch 400, loss[loss=0.1763, simple_loss=0.2603, pruned_loss=0.04614, over 18426.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.258, pruned_loss=0.04031, over 3102090.09 frames. ], batch size: 48, lr: 6.36e-03, grad_scale: 8.0
2023-03-09 10:07:37,203 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62183.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:08:00,265 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1453, 2.5381, 2.3118, 2.6054, 3.2200, 3.0443, 2.8748, 2.6595], device='cuda:3'), covar=tensor([0.0184, 0.0274, 0.0586, 0.0439, 0.0216, 0.0179, 0.0417, 0.0335], device='cuda:3'), in_proj_covar=tensor([0.0132, 0.0126, 0.0159, 0.0152, 0.0119, 0.0107, 0.0150, 0.0146], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:08:29,840 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0
2023-03-09 10:08:30,218 INFO [train.py:898] (3/4) Epoch 18, batch 450, loss[loss=0.1601, simple_loss=0.2482, pruned_loss=0.03601, over 18353.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2581, pruned_loss=0.04046, over 3208771.55 frames. ], batch size: 46, lr: 6.35e-03, grad_scale: 8.0
2023-03-09 10:08:46,970 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9701, 3.7377, 5.1599, 4.6735, 3.6132, 3.2901, 4.8007, 5.3627], device='cuda:3'), covar=tensor([0.0731, 0.1678, 0.0224, 0.0347, 0.0832, 0.1058, 0.0301, 0.0283], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0269, 0.0138, 0.0177, 0.0186, 0.0187, 0.0189, 0.0184], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:09:15,164 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.738e+02 2.782e+02 3.170e+02 3.804e+02 6.822e+02, threshold=6.341e+02, percent-clipped=2.0
2023-03-09 10:09:18,891 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62270.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:09:22,398 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62273.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:09:29,823 INFO [train.py:898] (3/4) Epoch 18, batch 500, loss[loss=0.1614, simple_loss=0.2532, pruned_loss=0.03477, over 18584.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.257, pruned_loss=0.03986, over 3295869.76 frames. ], batch size: 54, lr: 6.35e-03, grad_scale: 8.0
2023-03-09 10:10:28,250 INFO [train.py:898] (3/4) Epoch 18, batch 550, loss[loss=0.1806, simple_loss=0.2726, pruned_loss=0.04436, over 17054.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2563, pruned_loss=0.03956, over 3370767.70 frames. ], batch size: 78, lr: 6.35e-03, grad_scale: 8.0
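The zipformer.py:625 lines track, per encoder stack, a warmup interval in batches and which layers (if any) are randomly skipped at this step; this run is long past warmup, so num_to_drop=0 throughout. A toy sketch of such a schedule, in which the decay shape and maximum drop rate are assumptions rather than zipformer's actual constants:

```python
import random

def pick_layers_to_drop(num_layers: int, batch_count: float,
                        warmup_begin: float, warmup_end: float,
                        max_drop_rate: float = 0.1) -> set:
    if batch_count >= warmup_end:
        return set()                      # past warmup: never drop (as logged)
    # Assumed shape: drop probability decays linearly to 0 across the warmup.
    frac = (warmup_end - batch_count) / (warmup_end - warmup_begin)
    p = max(0.0, min(1.0, frac)) * max_drop_rate
    return {i for i in range(num_layers) if random.random() < p}
```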
2023-03-09 10:10:31,833 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0
2023-03-09 10:10:35,034 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62334.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:11:13,350 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.182e+02 2.712e+02 3.173e+02 3.739e+02 5.690e+02, threshold=6.346e+02, percent-clipped=0.0
2023-03-09 10:11:27,638 INFO [train.py:898] (3/4) Epoch 18, batch 600, loss[loss=0.1654, simple_loss=0.2475, pruned_loss=0.0417, over 18268.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2561, pruned_loss=0.03943, over 3409985.56 frames. ], batch size: 47, lr: 6.35e-03, grad_scale: 8.0
2023-03-09 10:11:38,785 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62388.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:11:43,010 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62391.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:11:52,593 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62399.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:12:11,605 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-09 10:12:26,537 INFO [train.py:898] (3/4) Epoch 18, batch 650, loss[loss=0.1539, simple_loss=0.2454, pruned_loss=0.03116, over 18359.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2558, pruned_loss=0.03933, over 3447342.08 frames. ], batch size: 55, lr: 6.34e-03, grad_scale: 8.0
2023-03-09 10:12:35,771 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=62436.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:12:48,619 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=62447.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:12:55,071 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62452.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:13:12,085 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.651e+02 2.765e+02 3.266e+02 3.962e+02 6.380e+02, threshold=6.532e+02, percent-clipped=1.0
2023-03-09 10:13:17,235 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1202, 3.9971, 5.4567, 3.2315, 4.9062, 2.7564, 3.1303, 2.0391], device='cuda:3'), covar=tensor([0.0948, 0.0786, 0.0090, 0.0659, 0.0351, 0.2405, 0.2754, 0.1985], device='cuda:3'), in_proj_covar=tensor([0.0213, 0.0235, 0.0166, 0.0187, 0.0247, 0.0261, 0.0314, 0.0226], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 10:13:24,754 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62478.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:13:25,701 INFO [train.py:898] (3/4) Epoch 18, batch 700, loss[loss=0.1581, simple_loss=0.2486, pruned_loss=0.03379, over 18360.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2552, pruned_loss=0.03939, over 3481250.82 frames. ], batch size: 50, lr: 6.34e-03, grad_scale: 8.0
2023-03-09 10:14:20,533 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9254, 4.6242, 4.7050, 3.5273, 3.9012, 3.6044, 2.7646, 2.6554], device='cuda:3'), covar=tensor([0.0217, 0.0139, 0.0077, 0.0292, 0.0310, 0.0203, 0.0697, 0.0834], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0055, 0.0059, 0.0066, 0.0086, 0.0064, 0.0075, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 10:14:21,706 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9184, 5.3636, 5.2317, 5.3299, 4.8833, 5.2101, 4.6950, 5.2293], device='cuda:3'), covar=tensor([0.0193, 0.0245, 0.0215, 0.0381, 0.0348, 0.0209, 0.0996, 0.0261], device='cuda:3'), in_proj_covar=tensor([0.0205, 0.0250, 0.0241, 0.0305, 0.0258, 0.0255, 0.0296, 0.0244], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 10:14:22,936 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8412, 4.5341, 4.6158, 3.3839, 3.8185, 3.5443, 2.6047, 2.5762], device='cuda:3'), covar=tensor([0.0219, 0.0138, 0.0075, 0.0319, 0.0314, 0.0196, 0.0756, 0.0828], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0056, 0.0059, 0.0066, 0.0086, 0.0064, 0.0075, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 10:14:24,851 INFO [train.py:898] (3/4) Epoch 18, batch 750, loss[loss=0.178, simple_loss=0.275, pruned_loss=0.0405, over 18300.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2554, pruned_loss=0.03958, over 3508138.20 frames. ], batch size: 54, lr: 6.34e-03, grad_scale: 4.0
2023-03-09 10:14:44,287 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62545.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:15:10,953 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.733e+02 2.727e+02 3.281e+02 3.903e+02 8.552e+02, threshold=6.561e+02, percent-clipped=5.0
2023-03-09 10:15:13,590 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62570.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:15:19,515 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.79 vs. limit=5.0
2023-03-09 10:15:23,456 INFO [train.py:898] (3/4) Epoch 18, batch 800, loss[loss=0.1506, simple_loss=0.2412, pruned_loss=0.02995, over 18309.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2554, pruned_loss=0.03956, over 3540765.68 frames. ], batch size: 49, lr: 6.34e-03, grad_scale: 8.0
2023-03-09 10:15:30,674 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62585.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:15:36,988 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-09 10:15:49,747 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5130, 3.5977, 5.0368, 2.9132, 4.5096, 2.5030, 3.0185, 1.9178], device='cuda:3'), covar=tensor([0.1278, 0.0887, 0.0128, 0.0711, 0.0453, 0.2580, 0.2698, 0.1946], device='cuda:3'), in_proj_covar=tensor([0.0214, 0.0236, 0.0168, 0.0187, 0.0247, 0.0261, 0.0316, 0.0227], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 10:15:57,069 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62606.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:16:11,034 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=62618.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:16:23,376 INFO [train.py:898] (3/4) Epoch 18, batch 850, loss[loss=0.1721, simple_loss=0.2635, pruned_loss=0.04033, over 18247.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2564, pruned_loss=0.03997, over 3559222.83 frames. ], batch size: 60, lr: 6.33e-03, grad_scale: 8.0
2023-03-09 10:16:23,595 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62629.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:16:44,037 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62646.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:17:00,027 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6547, 5.1700, 5.0982, 5.1463, 4.6370, 4.9900, 4.5146, 5.0215], device='cuda:3'), covar=tensor([0.0234, 0.0253, 0.0215, 0.0399, 0.0393, 0.0234, 0.1022, 0.0292], device='cuda:3'), in_proj_covar=tensor([0.0205, 0.0249, 0.0240, 0.0305, 0.0258, 0.0255, 0.0297, 0.0245], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 10:17:07,966 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62666.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:17:09,781 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.769e+02 2.717e+02 3.249e+02 3.987e+02 1.142e+03, threshold=6.498e+02, percent-clipped=2.0
2023-03-09 10:17:22,632 INFO [train.py:898] (3/4) Epoch 18, batch 900, loss[loss=0.1705, simple_loss=0.261, pruned_loss=0.04002, over 18307.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2562, pruned_loss=0.0398, over 3562989.55 frames. ], batch size: 57, lr: 6.33e-03, grad_scale: 8.0
2023-03-09 10:18:19,842 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62727.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:18:21,845 INFO [train.py:898] (3/4) Epoch 18, batch 950, loss[loss=0.1487, simple_loss=0.2332, pruned_loss=0.0321, over 17725.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.257, pruned_loss=0.03987, over 3575838.10 frames. ], batch size: 39, lr: 6.33e-03, grad_scale: 8.0
2023-03-09 10:18:43,102 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62747.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:19:07,245 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8941, 5.0299, 4.9649, 4.6305, 4.7567, 4.7108, 5.0492, 5.0234], device='cuda:3'), covar=tensor([0.0065, 0.0061, 0.0056, 0.0114, 0.0063, 0.0147, 0.0096, 0.0109], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0065, 0.0071, 0.0087, 0.0071, 0.0098, 0.0082, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:3')
2023-03-09 10:19:07,973 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.629e+02 2.750e+02 3.313e+02 3.832e+02 8.841e+02, threshold=6.625e+02, percent-clipped=2.0
2023-03-09 10:19:14,090 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62773.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:19:20,192 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62778.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:19:21,040 INFO [train.py:898] (3/4) Epoch 18, batch 1000, loss[loss=0.1816, simple_loss=0.2753, pruned_loss=0.04396, over 18130.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2563, pruned_loss=0.03971, over 3580659.97 frames. ], batch size: 62, lr: 6.33e-03, grad_scale: 8.0
2023-03-09 10:20:00,599 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-09 10:20:16,447 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=62826.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:20:19,782 INFO [train.py:898] (3/4) Epoch 18, batch 1050, loss[loss=0.1878, simple_loss=0.2821, pruned_loss=0.04679, over 18567.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2554, pruned_loss=0.03942, over 3595265.58 frames. ], batch size: 54, lr: 6.32e-03, grad_scale: 8.0
2023-03-09 10:20:26,394 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62834.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:21:05,607 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.820e+02 2.730e+02 3.063e+02 3.769e+02 8.307e+02, threshold=6.126e+02, percent-clipped=1.0
2023-03-09 10:21:12,120 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1448, 5.1936, 5.4037, 5.4585, 5.0379, 5.9686, 5.5630, 5.2596], device='cuda:3'), covar=tensor([0.1105, 0.0575, 0.0698, 0.0698, 0.1372, 0.0693, 0.0646, 0.1541], device='cuda:3'), in_proj_covar=tensor([0.0341, 0.0273, 0.0296, 0.0294, 0.0322, 0.0404, 0.0264, 0.0400], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 10:21:18,728 INFO [train.py:898] (3/4) Epoch 18, batch 1100, loss[loss=0.1674, simple_loss=0.261, pruned_loss=0.03693, over 18317.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2561, pruned_loss=0.0393, over 3604585.29 frames. ], batch size: 54, lr: 6.32e-03, grad_scale: 8.0
2023-03-09 10:21:44,755 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62901.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:22:17,695 INFO [train.py:898] (3/4) Epoch 18, batch 1150, loss[loss=0.1623, simple_loss=0.2613, pruned_loss=0.03159, over 18543.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2563, pruned_loss=0.03967, over 3615122.29 frames. ], batch size: 54, lr: 6.32e-03, grad_scale: 8.0
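The lr figures decay smoothly with both batch count and epoch (6.38e-03 at the start of epoch 18, down to 6.32e-03 by batch 1150). icefall's "Eden" scheduler has roughly the following form; this is sketched from memory, so the exponents and the default constants should be verified against icefall's optim.py before relying on them:

```python
def eden_lr(base_lr: float, batch: int, epoch: float,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    # Two inverse-quartic-root decay factors: one in batches, one in epochs.
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor
```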
2023-03-09 10:22:18,574 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62929.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:22:32,078 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62941.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:22:40,515 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9319, 5.0897, 5.0712, 5.0643, 4.9304, 5.6516, 5.2509, 4.9918], device='cuda:3'), covar=tensor([0.1083, 0.0728, 0.0690, 0.0792, 0.1292, 0.0725, 0.0654, 0.1573], device='cuda:3'), in_proj_covar=tensor([0.0344, 0.0275, 0.0298, 0.0297, 0.0324, 0.0407, 0.0267, 0.0401], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 10:23:03,230 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.305e+02 2.742e+02 3.290e+02 3.886e+02 6.757e+02, threshold=6.580e+02, percent-clipped=2.0
2023-03-09 10:23:07,218 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0
2023-03-09 10:23:13,014 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0872, 4.3421, 2.3053, 4.1311, 5.3323, 2.6665, 3.9121, 4.0341], device='cuda:3'), covar=tensor([0.0151, 0.1048, 0.1808, 0.0628, 0.0069, 0.1296, 0.0638, 0.0689], device='cuda:3'), in_proj_covar=tensor([0.0154, 0.0258, 0.0200, 0.0194, 0.0113, 0.0178, 0.0211, 0.0219], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0001, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:23:13,912 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=62977.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:23:15,925 INFO [train.py:898] (3/4) Epoch 18, batch 1200, loss[loss=0.1447, simple_loss=0.2292, pruned_loss=0.03012, over 18361.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2556, pruned_loss=0.03978, over 3609621.21 frames. ], batch size: 46, lr: 6.32e-03, grad_scale: 8.0
2023-03-09 10:23:48,133 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9370, 3.7276, 5.1471, 2.8805, 4.4571, 2.7242, 3.0896, 1.8365], device='cuda:3'), covar=tensor([0.1091, 0.0946, 0.0112, 0.0934, 0.0488, 0.2417, 0.2485, 0.2119], device='cuda:3'), in_proj_covar=tensor([0.0215, 0.0239, 0.0168, 0.0189, 0.0249, 0.0265, 0.0316, 0.0227], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 10:24:06,507 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63022.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:24:14,846 INFO [train.py:898] (3/4) Epoch 18, batch 1250, loss[loss=0.1508, simple_loss=0.2336, pruned_loss=0.03401, over 18376.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2559, pruned_loss=0.03977, over 3613428.70 frames. ], batch size: 42, lr: 6.31e-03, grad_scale: 8.0
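Each loss line breaks the objective into a simple_loss and a pruned_loss component. In pruned-RNN-T recipes the total is typically a weighted sum of the two, with the pruned term carrying most of the weight once training has warmed up; the scales below are placeholders for illustration, not values read from this run:

```python
def total_loss(simple_loss, pruned_loss,
               simple_loss_scale=0.5, pruned_loss_scale=1.0):
    # Assumed weighting: a down-scaled "simple" (linear-boundary) loss plus
    # the full RNN-T loss evaluated inside the pruned lattice.
    return simple_loss_scale * simple_loss + pruned_loss_scale * pruned_loss
```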
2023-03-09 10:24:35,998 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63047.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:24:44,749 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5109, 3.3908, 4.5307, 4.1063, 3.0518, 2.9640, 4.1307, 4.7040], device='cuda:3'), covar=tensor([0.0828, 0.1562, 0.0249, 0.0410, 0.0954, 0.1086, 0.0377, 0.0352], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0269, 0.0139, 0.0177, 0.0186, 0.0186, 0.0189, 0.0186], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:25:00,075 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.748e+02 2.862e+02 3.402e+02 4.233e+02 9.717e+02, threshold=6.805e+02, percent-clipped=8.0
2023-03-09 10:25:13,527 INFO [train.py:898] (3/4) Epoch 18, batch 1300, loss[loss=0.1729, simple_loss=0.2644, pruned_loss=0.04073, over 18492.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2569, pruned_loss=0.0402, over 3609235.19 frames. ], batch size: 53, lr: 6.31e-03, grad_scale: 8.0
2023-03-09 10:25:32,685 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=63095.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:26:12,351 INFO [train.py:898] (3/4) Epoch 18, batch 1350, loss[loss=0.1594, simple_loss=0.2523, pruned_loss=0.0332, over 18287.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2567, pruned_loss=0.03994, over 3601621.76 frames. ], batch size: 49, lr: 6.31e-03, grad_scale: 8.0
2023-03-09 10:26:13,172 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63129.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:26:58,492 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.671e+02 2.877e+02 3.417e+02 4.263e+02 7.577e+02, threshold=6.833e+02, percent-clipped=1.0
2023-03-09 10:27:00,089 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7665, 2.9477, 4.4201, 3.8787, 2.9883, 4.7750, 4.0935, 3.1181], device='cuda:3'), covar=tensor([0.0489, 0.1507, 0.0262, 0.0432, 0.1480, 0.0186, 0.0488, 0.0949], device='cuda:3'), in_proj_covar=tensor([0.0209, 0.0236, 0.0197, 0.0157, 0.0225, 0.0207, 0.0241, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 10:27:11,800 INFO [train.py:898] (3/4) Epoch 18, batch 1400, loss[loss=0.1846, simple_loss=0.2779, pruned_loss=0.04568, over 17029.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2571, pruned_loss=0.03985, over 3592787.79 frames. ], batch size: 78, lr: 6.31e-03, grad_scale: 8.0
2023-03-09 10:27:39,028 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63201.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:28:02,581 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63221.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:28:11,977 INFO [train.py:898] (3/4) Epoch 18, batch 1450, loss[loss=0.1571, simple_loss=0.2435, pruned_loss=0.03536, over 18497.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2564, pruned_loss=0.03975, over 3591870.54 frames. ], batch size: 47, lr: 6.30e-03, grad_scale: 8.0
2023-03-09 10:28:26,302 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63241.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:28:35,860 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=63249.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:28:57,797 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.907e+02 2.745e+02 3.338e+02 4.107e+02 1.231e+03, threshold=6.676e+02, percent-clipped=2.0
2023-03-09 10:29:10,024 INFO [train.py:898] (3/4) Epoch 18, batch 1500, loss[loss=0.1729, simple_loss=0.2633, pruned_loss=0.04125, over 18212.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2566, pruned_loss=0.03977, over 3590190.95 frames. ], batch size: 60, lr: 6.30e-03, grad_scale: 8.0
2023-03-09 10:29:14,275 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63282.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:29:22,577 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=63289.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:29:31,421 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1260, 4.2984, 2.6487, 4.2655, 5.3564, 2.6370, 4.0694, 4.2177], device='cuda:3'), covar=tensor([0.0140, 0.1135, 0.1477, 0.0558, 0.0072, 0.1156, 0.0557, 0.0638], device='cuda:3'), in_proj_covar=tensor([0.0155, 0.0260, 0.0201, 0.0195, 0.0114, 0.0180, 0.0213, 0.0221], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:30:01,218 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63322.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:30:09,225 INFO [train.py:898] (3/4) Epoch 18, batch 1550, loss[loss=0.1537, simple_loss=0.2438, pruned_loss=0.0318, over 18495.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2565, pruned_loss=0.03955, over 3596077.72 frames. ], batch size: 47, lr: 6.30e-03, grad_scale: 8.0
2023-03-09 10:30:55,757 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.939e+02 2.883e+02 3.563e+02 4.171e+02 7.864e+02, threshold=7.125e+02, percent-clipped=2.0
2023-03-09 10:30:58,246 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=63370.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:31:08,395 INFO [train.py:898] (3/4) Epoch 18, batch 1600, loss[loss=0.1815, simple_loss=0.2749, pruned_loss=0.04402, over 18273.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2565, pruned_loss=0.03977, over 3591966.50 frames. ], batch size: 60, lr: 6.30e-03, grad_scale: 8.0
2023-03-09 10:31:58,086 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63420.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:32:00,402 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63422.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:32:08,057 INFO [train.py:898] (3/4) Epoch 18, batch 1650, loss[loss=0.1632, simple_loss=0.2518, pruned_loss=0.03728, over 18545.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2566, pruned_loss=0.03969, over 3603351.71 frames. ], batch size: 49, lr: 6.29e-03, grad_scale: 8.0
2023-03-09 10:32:08,357 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63429.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:32:54,727 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.082e+02 2.837e+02 3.509e+02 4.099e+02 8.239e+02, threshold=7.017e+02, percent-clipped=1.0
2023-03-09 10:32:57,779 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-09 10:33:05,402 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=63477.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:33:07,399 INFO [train.py:898] (3/4) Epoch 18, batch 1700, loss[loss=0.1981, simple_loss=0.2801, pruned_loss=0.05802, over 17202.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2563, pruned_loss=0.03999, over 3601231.00 frames. ], batch size: 78, lr: 6.29e-03, grad_scale: 8.0
2023-03-09 10:33:10,132 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63481.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:33:12,292 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63483.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:34:06,916 INFO [train.py:898] (3/4) Epoch 18, batch 1750, loss[loss=0.1618, simple_loss=0.2539, pruned_loss=0.03488, over 18297.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2567, pruned_loss=0.03982, over 3597318.51 frames. ], batch size: 49, lr: 6.29e-03, grad_scale: 8.0
2023-03-09 10:34:26,227 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.63 vs. limit=2.0
2023-03-09 10:34:52,617 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.976e+02 2.714e+02 3.242e+02 3.831e+02 6.997e+02, threshold=6.484e+02, percent-clipped=0.0
2023-03-09 10:34:54,143 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63569.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:35:03,628 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63577.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:35:05,743 INFO [train.py:898] (3/4) Epoch 18, batch 1800, loss[loss=0.15, simple_loss=0.2349, pruned_loss=0.03254, over 18372.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2571, pruned_loss=0.04025, over 3568657.28 frames. ], batch size: 46, lr: 6.29e-03, grad_scale: 8.0
2023-03-09 10:35:13,931 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63586.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:36:05,193 INFO [train.py:898] (3/4) Epoch 18, batch 1850, loss[loss=0.1538, simple_loss=0.2461, pruned_loss=0.03078, over 18308.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2567, pruned_loss=0.03982, over 3582755.81 frames. ], batch size: 54, lr: 6.28e-03, grad_scale: 8.0
2023-03-09 10:36:06,681 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63630.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:36:14,579 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9235, 4.1617, 4.1463, 4.2222, 3.8179, 4.1016, 3.7870, 4.1136], device='cuda:3'), covar=tensor([0.0251, 0.0368, 0.0258, 0.0430, 0.0350, 0.0248, 0.0871, 0.0317], device='cuda:3'), in_proj_covar=tensor([0.0207, 0.0251, 0.0242, 0.0309, 0.0262, 0.0257, 0.0298, 0.0249], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 10:36:26,805 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63647.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:36:51,543 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.913e+02 2.867e+02 3.371e+02 3.791e+02 7.715e+02, threshold=6.743e+02, percent-clipped=3.0
2023-03-09 10:37:04,240 INFO [train.py:898] (3/4) Epoch 18, batch 1900, loss[loss=0.167, simple_loss=0.2575, pruned_loss=0.03821, over 18388.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2568, pruned_loss=0.03992, over 3577395.66 frames. ], batch size: 52, lr: 6.28e-03, grad_scale: 8.0
2023-03-09 10:37:19,031 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5289, 3.4329, 4.7296, 4.2356, 3.1778, 2.9350, 4.2104, 4.9679], device='cuda:3'), covar=tensor([0.0920, 0.1653, 0.0186, 0.0403, 0.0945, 0.1166, 0.0412, 0.0243], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0272, 0.0140, 0.0178, 0.0188, 0.0187, 0.0191, 0.0187], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:37:32,362 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.39 vs. limit=5.0
2023-03-09 10:38:02,788 INFO [train.py:898] (3/4) Epoch 18, batch 1950, loss[loss=0.1628, simple_loss=0.2446, pruned_loss=0.04048, over 18423.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.257, pruned_loss=0.04041, over 3565224.72 frames. ], batch size: 43, lr: 6.28e-03, grad_scale: 8.0
2023-03-09 10:38:19,608 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0331, 4.7002, 4.7742, 3.6629, 3.9102, 3.6472, 2.8432, 2.3696], device='cuda:3'), covar=tensor([0.0183, 0.0133, 0.0065, 0.0238, 0.0299, 0.0200, 0.0606, 0.0849], device='cuda:3'), in_proj_covar=tensor([0.0067, 0.0056, 0.0059, 0.0066, 0.0086, 0.0064, 0.0075, 0.0082], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 10:38:50,311 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.821e+02 2.688e+02 3.189e+02 3.862e+02 9.985e+02, threshold=6.379e+02, percent-clipped=3.0
2023-03-09 10:38:58,720 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4908, 2.7028, 2.3361, 2.8796, 3.5936, 3.4644, 3.0601, 2.8229], device='cuda:3'), covar=tensor([0.0213, 0.0324, 0.0636, 0.0372, 0.0180, 0.0177, 0.0374, 0.0398], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0130, 0.0162, 0.0151, 0.0124, 0.0110, 0.0151, 0.0151], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:38:59,652 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63776.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:39:02,027 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63778.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:39:02,973 INFO [train.py:898] (3/4) Epoch 18, batch 2000, loss[loss=0.1878, simple_loss=0.2767, pruned_loss=0.0495, over 18486.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2573, pruned_loss=0.04051, over 3567185.62 frames. ], batch size: 51, lr: 6.28e-03, grad_scale: 8.0
2023-03-09 10:39:40,067 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63810.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:40:02,093 INFO [train.py:898] (3/4) Epoch 18, batch 2050, loss[loss=0.1804, simple_loss=0.2779, pruned_loss=0.04139, over 18485.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2571, pruned_loss=0.04064, over 3566165.17 frames. ], batch size: 59, lr: 6.27e-03, grad_scale: 8.0
2023-03-09 10:40:14,338 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8271, 3.8261, 3.7174, 3.2220, 3.4752, 2.8131, 2.9210, 3.8851], device='cuda:3'), covar=tensor([0.0045, 0.0075, 0.0062, 0.0119, 0.0091, 0.0185, 0.0183, 0.0055], device='cuda:3'), in_proj_covar=tensor([0.0127, 0.0151, 0.0127, 0.0179, 0.0133, 0.0171, 0.0176, 0.0112], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 10:40:21,088 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63845.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:40:48,569 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.076e+02 2.804e+02 3.273e+02 4.132e+02 7.162e+02, threshold=6.546e+02, percent-clipped=3.0
2023-03-09 10:40:53,070 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63871.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:40:59,909 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63877.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:41:01,907 INFO [train.py:898] (3/4) Epoch 18, batch 2100, loss[loss=0.1761, simple_loss=0.2753, pruned_loss=0.0384, over 18394.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2571, pruned_loss=0.04053, over 3571885.28 frames. ], batch size: 52, lr: 6.27e-03, grad_scale: 8.0
2023-03-09 10:41:31,216 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2718, 5.1707, 5.3863, 5.4640, 5.2238, 5.9841, 5.6274, 5.2306], device='cuda:3'), covar=tensor([0.1097, 0.0565, 0.0800, 0.0777, 0.1348, 0.0726, 0.0729, 0.1775], device='cuda:3'), in_proj_covar=tensor([0.0348, 0.0273, 0.0298, 0.0295, 0.0323, 0.0407, 0.0271, 0.0400], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 10:41:33,672 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63906.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:41:45,780 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63916.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:41:56,470 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=63925.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:41:56,488 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63925.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:42:00,982 INFO [train.py:898] (3/4) Epoch 18, batch 2150, loss[loss=0.14, simple_loss=0.2156, pruned_loss=0.03217, over 17753.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2567, pruned_loss=0.04005, over 3585825.58 frames. ], batch size: 39, lr: 6.27e-03, grad_scale: 8.0
2023-03-09 10:42:06,158 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0
2023-03-09 10:42:16,526 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63942.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:42:47,035 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.000e+02 2.608e+02 3.106e+02 3.507e+02 7.149e+02, threshold=6.212e+02, percent-clipped=2.0
2023-03-09 10:42:57,368 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1151, 5.0926, 5.2928, 5.3682, 5.1035, 5.8871, 5.4820, 5.2178], device='cuda:3'), covar=tensor([0.1025, 0.0647, 0.0752, 0.0769, 0.1255, 0.0737, 0.0730, 0.1882], device='cuda:3'), in_proj_covar=tensor([0.0343, 0.0270, 0.0294, 0.0292, 0.0318, 0.0402, 0.0267, 0.0396], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 10:42:57,516 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63977.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:42:59,803 INFO [train.py:898] (3/4) Epoch 18, batch 2200, loss[loss=0.1864, simple_loss=0.2768, pruned_loss=0.048, over 18043.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2569, pruned_loss=0.03975, over 3594789.70 frames. ], batch size: 62, lr: 6.27e-03, grad_scale: 8.0
2023-03-09 10:43:44,393 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7205, 3.7240, 4.9339, 4.3504, 3.2026, 3.0896, 4.4887, 5.2673], device='cuda:3'), covar=tensor([0.0775, 0.1413, 0.0218, 0.0412, 0.0945, 0.1104, 0.0353, 0.0184], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0270, 0.0141, 0.0180, 0.0187, 0.0186, 0.0191, 0.0186], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:43:51,596 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3010, 5.8819, 5.4412, 5.6399, 5.4660, 5.2899, 5.9009, 5.8584], device='cuda:3'), covar=tensor([0.1223, 0.0737, 0.0453, 0.0764, 0.1472, 0.0689, 0.0586, 0.0691], device='cuda:3'), in_proj_covar=tensor([0.0598, 0.0506, 0.0369, 0.0538, 0.0734, 0.0535, 0.0724, 0.0551], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 10:43:53,839 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3791, 5.9250, 5.4576, 5.6889, 5.5283, 5.3290, 5.9552, 5.9149], device='cuda:3'), covar=tensor([0.1180, 0.0703, 0.0420, 0.0743, 0.1339, 0.0685, 0.0590, 0.0675], device='cuda:3'), in_proj_covar=tensor([0.0597, 0.0506, 0.0369, 0.0537, 0.0734, 0.0534, 0.0723, 0.0551], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 10:43:56,200 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64023.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:44:02,721 INFO [train.py:898] (3/4) Epoch 18, batch 2250, loss[loss=0.1598, simple_loss=0.2518, pruned_loss=0.03394, over 18356.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2568, pruned_loss=0.0397, over 3593527.96 frames. ], batch size: 50, lr: 6.26e-03, grad_scale: 8.0
2023-03-09 10:44:15,035 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9102, 5.3020, 2.8004, 5.0980, 4.9836, 5.3106, 5.0777, 2.7427], device='cuda:3'), covar=tensor([0.0196, 0.0058, 0.0724, 0.0080, 0.0069, 0.0060, 0.0090, 0.0956], device='cuda:3'), in_proj_covar=tensor([0.0085, 0.0078, 0.0094, 0.0091, 0.0083, 0.0073, 0.0083, 0.0095], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 10:44:47,982 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.732e+02 2.740e+02 3.229e+02 3.665e+02 1.396e+03, threshold=6.458e+02, percent-clipped=4.0
2023-03-09 10:44:57,968 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64076.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:45:00,332 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64078.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:45:01,208 INFO [train.py:898] (3/4) Epoch 18, batch 2300, loss[loss=0.1727, simple_loss=0.2683, pruned_loss=0.03855, over 18482.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2574, pruned_loss=0.04, over 3600062.64 frames. ], batch size: 51, lr: 6.26e-03, grad_scale: 8.0
2023-03-09 10:45:07,902 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64084.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:45:51,552 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3599, 5.3619, 4.9775, 5.2983, 5.2679, 4.6354, 5.2035, 4.8529], device='cuda:3'), covar=tensor([0.0441, 0.0426, 0.1244, 0.0716, 0.0605, 0.0435, 0.0398, 0.1083], device='cuda:3'), in_proj_covar=tensor([0.0464, 0.0526, 0.0677, 0.0417, 0.0426, 0.0486, 0.0517, 0.0648], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 10:45:54,304 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=64124.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:45:56,502 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=64126.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:45:59,676 INFO [train.py:898] (3/4) Epoch 18, batch 2350, loss[loss=0.1737, simple_loss=0.2634, pruned_loss=0.04203, over 18227.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2581, pruned_loss=0.0404, over 3592051.67 frames. ], batch size: 60, lr: 6.26e-03, grad_scale: 8.0
2023-03-09 10:46:43,655 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64166.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:46:45,511 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.074e+02 2.798e+02 3.262e+02 4.005e+02 1.022e+03, threshold=6.524e+02, percent-clipped=3.0
2023-03-09 10:46:58,410 INFO [train.py:898] (3/4) Epoch 18, batch 2400, loss[loss=0.1372, simple_loss=0.2197, pruned_loss=0.0274, over 18085.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2577, pruned_loss=0.04005, over 3594919.26 frames. ], batch size: 40, lr: 6.26e-03, grad_scale: 8.0
2023-03-09 10:47:24,708 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64201.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:47:53,545 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64225.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:47:58,510 INFO [train.py:898] (3/4) Epoch 18, batch 2450, loss[loss=0.1821, simple_loss=0.2742, pruned_loss=0.04497, over 18305.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2581, pruned_loss=0.04016, over 3597994.77 frames. ], batch size: 57, lr: 6.26e-03, grad_scale: 8.0
2023-03-09 10:48:02,286 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64232.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:48:14,236 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64242.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:48:44,842 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.830e+02 2.733e+02 3.243e+02 4.040e+02 1.212e+03, threshold=6.487e+02, percent-clipped=5.0
2023-03-09 10:48:49,193 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64272.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:48:50,318 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=64273.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:48:57,008 INFO [train.py:898] (3/4) Epoch 18, batch 2500, loss[loss=0.1872, simple_loss=0.281, pruned_loss=0.04672, over 17021.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2584, pruned_loss=0.04045, over 3583653.76 frames. ], batch size: 78, lr: 6.25e-03, grad_scale: 4.0
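grad_scale drops from 8.0 to 4.0 at batch 2500 above and recovers to 8.0 a few hundred batches later, the signature of dynamic fp16 loss scaling: the scale is halved after an overflow and grown again after a run of clean steps. A generic PyTorch AMP step for comparison (standard torch.cuda.amp usage, not this script's exact loop):

```python
import torch

scaler = torch.cuda.amp.GradScaler()          # maintains the dynamic scale

def amp_step(model, optimizer, batch, criterion):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = criterion(model(batch["inputs"]), batch["targets"])
    scaler.scale(loss).backward()             # backward on the scaled loss
    scaler.step(optimizer)                    # unscales; skips step on inf/nan
    scaler.update()                           # halve on overflow, else grow
    return loss.detach()
```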
2023-03-09 10:49:11,023 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=64290.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:49:14,756 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64293.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:49:30,972 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.14 vs. limit=5.0
2023-03-09 10:49:56,113 INFO [train.py:898] (3/4) Epoch 18, batch 2550, loss[loss=0.1579, simple_loss=0.24, pruned_loss=0.03788, over 18498.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2586, pruned_loss=0.04078, over 3571364.46 frames. ], batch size: 44, lr: 6.25e-03, grad_scale: 4.0
2023-03-09 10:50:01,688 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6241, 6.2325, 5.6895, 5.9330, 5.7820, 5.5248, 6.2550, 6.2308], device='cuda:3'), covar=tensor([0.1140, 0.0554, 0.0391, 0.0617, 0.1264, 0.0588, 0.0454, 0.0579], device='cuda:3'), in_proj_covar=tensor([0.0599, 0.0511, 0.0370, 0.0534, 0.0736, 0.0535, 0.0722, 0.0551], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 10:50:15,828 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64345.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:50:17,027 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4647, 3.3116, 2.1320, 4.2447, 2.9157, 4.0920, 2.3738, 3.7594], device='cuda:3'), covar=tensor([0.0656, 0.0890, 0.1461, 0.0483, 0.0917, 0.0350, 0.1181, 0.0417], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0221, 0.0186, 0.0274, 0.0190, 0.0261, 0.0199, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:50:37,314 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.73 vs. limit=5.0
2023-03-09 10:50:43,214 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.795e+02 2.856e+02 3.469e+02 4.229e+02 8.675e+02, threshold=6.938e+02, percent-clipped=2.0
2023-03-09 10:50:54,976 INFO [train.py:898] (3/4) Epoch 18, batch 2600, loss[loss=0.1748, simple_loss=0.2718, pruned_loss=0.03893, over 18364.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2591, pruned_loss=0.04072, over 3559685.05 frames. ], batch size: 46, lr: 6.25e-03, grad_scale: 4.0
2023-03-09 10:50:55,165 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64379.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:51:17,990 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3963, 2.5479, 4.0682, 3.7447, 2.4795, 4.2588, 3.7915, 2.6076], device='cuda:3'), covar=tensor([0.0536, 0.1623, 0.0297, 0.0330, 0.1724, 0.0270, 0.0554, 0.1141], device='cuda:3'), in_proj_covar=tensor([0.0206, 0.0234, 0.0197, 0.0155, 0.0224, 0.0208, 0.0239, 0.0198], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 10:51:27,240 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64406.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:51:53,362 INFO [train.py:898] (3/4) Epoch 18, batch 2650, loss[loss=0.1728, simple_loss=0.2743, pruned_loss=0.03569, over 18566.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2589, pruned_loss=0.0406, over 3574729.65 frames. ], batch size: 54, lr: 6.25e-03, grad_scale: 4.0
2023-03-09 10:52:35,728 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64465.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:52:36,902 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64466.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:52:39,838 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.908e+02 2.670e+02 3.183e+02 3.846e+02 7.889e+02, threshold=6.366e+02, percent-clipped=2.0
2023-03-09 10:52:52,170 INFO [train.py:898] (3/4) Epoch 18, batch 2700, loss[loss=0.1611, simple_loss=0.2397, pruned_loss=0.04131, over 17672.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2584, pruned_loss=0.04058, over 3571403.12 frames. ], batch size: 39, lr: 6.24e-03, grad_scale: 4.0
2023-03-09 10:53:18,547 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64501.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:53:32,918 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=64514.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:53:47,568 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64526.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:53:47,642 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9254, 3.4817, 2.6626, 3.2938, 4.0373, 2.5633, 3.4007, 3.3642], device='cuda:3'), covar=tensor([0.0232, 0.0968, 0.1271, 0.0686, 0.0119, 0.1109, 0.0637, 0.0742], device='cuda:3'), in_proj_covar=tensor([0.0156, 0.0264, 0.0203, 0.0196, 0.0115, 0.0183, 0.0214, 0.0222], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:53:50,709 INFO [train.py:898] (3/4) Epoch 18, batch 2750, loss[loss=0.1755, simple_loss=0.2684, pruned_loss=0.04134, over 18394.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2584, pruned_loss=0.04056, over 3574531.15 frames. ], batch size: 52, lr: 6.24e-03, grad_scale: 4.0
2023-03-09 10:54:14,698 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=64549.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:54:28,356 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.60 vs. limit=2.0
2023-03-09 10:54:37,411 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.946e+02 2.775e+02 3.277e+02 4.138e+02 9.183e+02, threshold=6.554e+02, percent-clipped=4.0
2023-03-09 10:54:41,206 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64572.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:54:49,224 INFO [train.py:898] (3/4) Epoch 18, batch 2800, loss[loss=0.1456, simple_loss=0.2265, pruned_loss=0.03233, over 18459.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2578, pruned_loss=0.04037, over 3578064.09 frames. ], batch size: 43, lr: 6.24e-03, grad_scale: 8.0
2023-03-09 10:55:00,295 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64588.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:55:01,583 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64589.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:55:15,822 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9100, 3.9568, 5.2893, 4.8340, 3.5213, 3.4206, 4.8449, 5.4576], device='cuda:3'), covar=tensor([0.0707, 0.1431, 0.0160, 0.0294, 0.0865, 0.0951, 0.0281, 0.0207], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0269, 0.0139, 0.0178, 0.0187, 0.0187, 0.0190, 0.0186], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 10:55:37,454 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=64620.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:55:47,429 INFO [train.py:898] (3/4) Epoch 18, batch 2850, loss[loss=0.1821, simple_loss=0.2732, pruned_loss=0.04549, over 18357.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2574, pruned_loss=0.04019, over 3580879.23 frames. ], batch size: 56, lr: 6.24e-03, grad_scale: 8.0
2023-03-09 10:56:10,920 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64648.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:56:13,250 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64650.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 10:56:34,850 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.051e+02 2.756e+02 3.294e+02 3.793e+02 6.493e+02, threshold=6.588e+02, percent-clipped=0.0
2023-03-09 10:56:46,030 INFO [train.py:898] (3/4) Epoch 18, batch 2900, loss[loss=0.179, simple_loss=0.2819, pruned_loss=0.03807, over 18364.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2575, pruned_loss=0.04016, over 3566475.97 frames. ], batch size: 56, lr: 6.23e-03, grad_scale: 8.0
], batch size: 56, lr: 6.23e-03, grad_scale: 8.0 2023-03-09 10:56:46,423 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64679.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:57:11,489 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3283, 5.9729, 5.4690, 5.6557, 5.4889, 5.2595, 5.9668, 5.9540], device='cuda:3'), covar=tensor([0.1348, 0.0761, 0.0471, 0.0808, 0.1611, 0.0869, 0.0622, 0.0724], device='cuda:3'), in_proj_covar=tensor([0.0601, 0.0514, 0.0371, 0.0536, 0.0733, 0.0535, 0.0731, 0.0554], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 10:57:12,597 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64701.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:57:13,729 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5939, 6.2111, 5.6086, 5.9220, 5.7764, 5.4791, 6.2320, 6.2008], device='cuda:3'), covar=tensor([0.1209, 0.0650, 0.0394, 0.0732, 0.1468, 0.0910, 0.0557, 0.0652], device='cuda:3'), in_proj_covar=tensor([0.0601, 0.0515, 0.0372, 0.0536, 0.0734, 0.0535, 0.0731, 0.0554], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 10:57:14,969 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64703.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:57:22,037 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64709.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:57:42,560 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=64727.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:57:44,580 INFO [train.py:898] (3/4) Epoch 18, batch 2950, loss[loss=0.1542, simple_loss=0.2416, pruned_loss=0.0334, over 18160.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2572, pruned_loss=0.03998, over 3575805.98 frames. ], batch size: 44, lr: 6.23e-03, grad_scale: 8.0 2023-03-09 10:58:01,649 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-09 10:58:04,258 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64745.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:58:12,651 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64752.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:58:26,547 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64764.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:58:31,869 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.931e+02 2.790e+02 3.324e+02 3.890e+02 1.048e+03, threshold=6.648e+02, percent-clipped=3.0 2023-03-09 10:58:43,053 INFO [train.py:898] (3/4) Epoch 18, batch 3000, loss[loss=0.1747, simple_loss=0.2708, pruned_loss=0.03928, over 18500.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2574, pruned_loss=0.04014, over 3576175.05 frames. 
], batch size: 51, lr: 6.23e-03, grad_scale: 8.0 2023-03-09 10:58:43,053 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 10:58:53,915 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6275, 3.8356, 3.7897, 3.8459, 3.5258, 3.7778, 3.4099, 3.8248], device='cuda:3'), covar=tensor([0.0328, 0.0384, 0.0285, 0.0558, 0.0399, 0.0307, 0.0973, 0.0372], device='cuda:3'), in_proj_covar=tensor([0.0205, 0.0249, 0.0238, 0.0307, 0.0257, 0.0252, 0.0293, 0.0245], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006, 0.0005], device='cuda:3') 2023-03-09 10:58:55,124 INFO [train.py:932] (3/4) Epoch 18, validation: loss=0.1513, simple_loss=0.2515, pruned_loss=0.02557, over 944034.00 frames. 2023-03-09 10:58:55,124 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 10:59:18,963 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-09 10:59:28,207 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64806.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:59:36,027 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64813.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:59:44,872 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64821.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 10:59:53,920 INFO [train.py:898] (3/4) Epoch 18, batch 3050, loss[loss=0.2305, simple_loss=0.302, pruned_loss=0.07948, over 12525.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2569, pruned_loss=0.03985, over 3578397.60 frames. ], batch size: 130, lr: 6.23e-03, grad_scale: 8.0 2023-03-09 11:00:41,792 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.847e+02 2.582e+02 2.978e+02 3.625e+02 9.287e+02, threshold=5.956e+02, percent-clipped=2.0 2023-03-09 11:00:52,642 INFO [train.py:898] (3/4) Epoch 18, batch 3100, loss[loss=0.1608, simple_loss=0.244, pruned_loss=0.03879, over 18273.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2558, pruned_loss=0.03959, over 3587045.66 frames. ], batch size: 45, lr: 6.22e-03, grad_scale: 8.0 2023-03-09 11:01:04,237 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64888.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:01:25,437 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-09 11:01:51,079 INFO [train.py:898] (3/4) Epoch 18, batch 3150, loss[loss=0.1935, simple_loss=0.275, pruned_loss=0.05602, over 12433.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2567, pruned_loss=0.03989, over 3580975.56 frames. ], batch size: 129, lr: 6.22e-03, grad_scale: 8.0 2023-03-09 11:01:59,103 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=64936.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:02:10,212 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64945.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:02:38,246 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.946e+02 2.751e+02 3.227e+02 4.235e+02 1.394e+03, threshold=6.453e+02, percent-clipped=7.0 2023-03-09 11:02:46,572 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64976.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:02:49,649 INFO [train.py:898] (3/4) Epoch 18, batch 3200, loss[loss=0.1654, simple_loss=0.2489, pruned_loss=0.04099, over 18500.00 frames. 
], tot_loss[loss=0.1676, simple_loss=0.256, pruned_loss=0.03953, over 3588426.03 frames. ], batch size: 47, lr: 6.22e-03, grad_scale: 8.0 2023-03-09 11:03:16,780 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65001.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:03:20,120 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65004.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:03:43,623 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.57 vs. limit=5.0 2023-03-09 11:03:48,371 INFO [train.py:898] (3/4) Epoch 18, batch 3250, loss[loss=0.1393, simple_loss=0.2214, pruned_loss=0.02867, over 18447.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2563, pruned_loss=0.0397, over 3595398.60 frames. ], batch size: 43, lr: 6.22e-03, grad_scale: 8.0 2023-03-09 11:03:58,118 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65037.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 11:04:12,034 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65049.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:04:19,626 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65055.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:04:24,042 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65059.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:04:35,621 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.699e+02 2.719e+02 3.291e+02 4.073e+02 7.543e+02, threshold=6.581e+02, percent-clipped=1.0 2023-03-09 11:04:46,974 INFO [train.py:898] (3/4) Epoch 18, batch 3300, loss[loss=0.1656, simple_loss=0.2596, pruned_loss=0.03584, over 18372.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2561, pruned_loss=0.0395, over 3605224.14 frames. ], batch size: 50, lr: 6.21e-03, grad_scale: 4.0 2023-03-09 11:04:54,362 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-09 11:05:13,453 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65101.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:05:21,862 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65108.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:05:31,505 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5111, 3.3184, 2.0711, 4.2752, 3.0261, 4.1422, 2.2399, 3.7345], device='cuda:3'), covar=tensor([0.0601, 0.0798, 0.1383, 0.0447, 0.0803, 0.0264, 0.1228, 0.0426], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0220, 0.0186, 0.0271, 0.0188, 0.0259, 0.0198, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 11:05:31,532 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65116.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:05:37,738 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65121.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:05:46,559 INFO [train.py:898] (3/4) Epoch 18, batch 3350, loss[loss=0.17, simple_loss=0.269, pruned_loss=0.03547, over 18490.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2556, pruned_loss=0.03935, over 3602891.20 frames. ], batch size: 53, lr: 6.21e-03, grad_scale: 4.0 2023-03-09 11:05:53,935 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. 
limit=2.0 2023-03-09 11:06:04,874 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9032, 4.6941, 4.6844, 3.4948, 3.8815, 3.4492, 2.8243, 2.6029], device='cuda:3'), covar=tensor([0.0228, 0.0148, 0.0078, 0.0296, 0.0296, 0.0236, 0.0655, 0.0836], device='cuda:3'), in_proj_covar=tensor([0.0068, 0.0058, 0.0061, 0.0068, 0.0087, 0.0066, 0.0076, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 11:06:33,171 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65169.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:06:34,022 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.758e+02 2.720e+02 3.418e+02 4.125e+02 2.086e+03, threshold=6.835e+02, percent-clipped=5.0 2023-03-09 11:06:44,724 INFO [train.py:898] (3/4) Epoch 18, batch 3400, loss[loss=0.1577, simple_loss=0.2489, pruned_loss=0.03322, over 18363.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2569, pruned_loss=0.03994, over 3602430.14 frames. ], batch size: 50, lr: 6.21e-03, grad_scale: 4.0 2023-03-09 11:06:53,982 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5891, 3.4841, 4.7268, 4.2043, 3.2048, 2.9526, 4.1479, 4.8965], device='cuda:3'), covar=tensor([0.0805, 0.1435, 0.0190, 0.0390, 0.0893, 0.1111, 0.0414, 0.0192], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0270, 0.0140, 0.0179, 0.0187, 0.0187, 0.0191, 0.0187], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 11:07:43,151 INFO [train.py:898] (3/4) Epoch 18, batch 3450, loss[loss=0.1463, simple_loss=0.2417, pruned_loss=0.02547, over 18373.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2565, pruned_loss=0.03986, over 3591780.14 frames. ], batch size: 50, lr: 6.21e-03, grad_scale: 4.0 2023-03-09 11:08:01,789 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65245.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:08:31,046 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.103e+02 2.885e+02 3.344e+02 3.937e+02 8.073e+02, threshold=6.688e+02, percent-clipped=3.0 2023-03-09 11:08:41,987 INFO [train.py:898] (3/4) Epoch 18, batch 3500, loss[loss=0.1807, simple_loss=0.2725, pruned_loss=0.04448, over 18302.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2566, pruned_loss=0.03987, over 3590813.80 frames. ], batch size: 54, lr: 6.20e-03, grad_scale: 4.0 2023-03-09 11:08:49,069 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65285.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:08:58,190 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65293.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:09:10,871 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65304.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:09:37,557 INFO [train.py:898] (3/4) Epoch 18, batch 3550, loss[loss=0.1687, simple_loss=0.2649, pruned_loss=0.03626, over 18277.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.257, pruned_loss=0.03993, over 3600832.29 frames. 
], batch size: 49, lr: 6.20e-03, grad_scale: 4.0 2023-03-09 11:09:41,609 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65332.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 11:09:57,221 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65346.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:10:03,366 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65352.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:10:10,934 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65359.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:10:16,544 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6442, 3.5054, 2.2017, 4.4616, 3.2449, 4.4021, 2.7105, 4.0275], device='cuda:3'), covar=tensor([0.0587, 0.0732, 0.1464, 0.0415, 0.0797, 0.0336, 0.1068, 0.0392], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0222, 0.0188, 0.0273, 0.0189, 0.0260, 0.0199, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 11:10:23,168 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.008e+02 2.567e+02 2.931e+02 3.482e+02 6.619e+02, threshold=5.862e+02, percent-clipped=0.0 2023-03-09 11:10:33,170 INFO [train.py:898] (3/4) Epoch 18, batch 3600, loss[loss=0.184, simple_loss=0.2712, pruned_loss=0.04839, over 17794.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2575, pruned_loss=0.0402, over 3592143.89 frames. ], batch size: 70, lr: 6.20e-03, grad_scale: 8.0 2023-03-09 11:10:57,616 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65401.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:11:03,663 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65407.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:11:04,719 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65408.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:11:07,308 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65411.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 11:11:38,114 INFO [train.py:898] (3/4) Epoch 19, batch 0, loss[loss=0.18, simple_loss=0.266, pruned_loss=0.04702, over 18397.00 frames. ], tot_loss[loss=0.18, simple_loss=0.266, pruned_loss=0.04702, over 18397.00 frames. ], batch size: 52, lr: 6.03e-03, grad_scale: 8.0 2023-03-09 11:11:38,115 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 11:11:49,828 INFO [train.py:932] (3/4) Epoch 19, validation: loss=0.1513, simple_loss=0.2518, pruned_loss=0.02538, over 944034.00 frames. 
2023-03-09 11:11:49,829 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 11:12:03,900 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6876, 2.6601, 4.3442, 3.9117, 2.4794, 4.5937, 3.9390, 2.6257], device='cuda:3'), covar=tensor([0.0440, 0.1630, 0.0230, 0.0320, 0.1684, 0.0208, 0.0538, 0.1153], device='cuda:3'), in_proj_covar=tensor([0.0198, 0.0226, 0.0193, 0.0151, 0.0215, 0.0198, 0.0233, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 11:12:22,127 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7418, 3.1333, 2.5111, 3.1390, 3.8019, 3.6308, 3.3771, 3.0955], device='cuda:3'), covar=tensor([0.0188, 0.0209, 0.0618, 0.0294, 0.0137, 0.0154, 0.0264, 0.0312], device='cuda:3'), in_proj_covar=tensor([0.0133, 0.0130, 0.0161, 0.0152, 0.0124, 0.0110, 0.0149, 0.0152], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 11:12:32,076 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65449.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:12:40,044 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65456.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:12:47,620 INFO [train.py:898] (3/4) Epoch 19, batch 50, loss[loss=0.1733, simple_loss=0.2617, pruned_loss=0.04246, over 18105.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2561, pruned_loss=0.04027, over 820491.65 frames. ], batch size: 62, lr: 6.03e-03, grad_scale: 8.0
2023-03-09 11:12:55,631 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.183e+02 3.168e+02 3.745e+02 4.486e+02 9.575e+02, threshold=7.490e+02, percent-clipped=6.0
2023-03-09 11:13:19,056 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8707, 5.3843, 5.3591, 5.3723, 4.8489, 5.3093, 4.7085, 5.2689], device='cuda:3'), covar=tensor([0.0244, 0.0270, 0.0176, 0.0353, 0.0406, 0.0221, 0.1096, 0.0314], device='cuda:3'), in_proj_covar=tensor([0.0206, 0.0253, 0.0241, 0.0312, 0.0261, 0.0258, 0.0299, 0.0250], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 11:13:46,483 INFO [train.py:898] (3/4) Epoch 19, batch 100, loss[loss=0.1373, simple_loss=0.2206, pruned_loss=0.02706, over 18443.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2565, pruned_loss=0.0407, over 1436193.60 frames. ], batch size: 43, lr: 6.03e-03, grad_scale: 8.0
2023-03-09 11:13:54,890 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5992, 2.3508, 2.6162, 2.6593, 3.2532, 5.0331, 4.8300, 3.5616], device='cuda:3'), covar=tensor([0.1781, 0.2407, 0.3062, 0.1941, 0.2374, 0.0187, 0.0345, 0.0860], device='cuda:3'), in_proj_covar=tensor([0.0291, 0.0342, 0.0373, 0.0274, 0.0389, 0.0234, 0.0294, 0.0249], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3')
2023-03-09 11:14:08,291 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65532.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:14:44,712 INFO [train.py:898] (3/4) Epoch 19, batch 150, loss[loss=0.1674, simple_loss=0.2579, pruned_loss=0.03847, over 18361.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2561, pruned_loss=0.03973, over 1921518.92 frames. ], batch size: 46, lr: 6.02e-03, grad_scale: 4.0
2023-03-09 11:14:53,878 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.910e+02 2.761e+02 3.150e+02 3.838e+02 8.694e+02, threshold=6.300e+02, percent-clipped=3.0
2023-03-09 11:15:20,757 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65593.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 11:15:44,547 INFO [train.py:898] (3/4) Epoch 19, batch 200, loss[loss=0.1626, simple_loss=0.2571, pruned_loss=0.03401, over 18401.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2579, pruned_loss=0.04004, over 2285455.49 frames. ], batch size: 52, lr: 6.02e-03, grad_scale: 4.0
2023-03-09 11:16:06,641 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65632.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:16:17,263 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65641.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:16:32,862 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2760, 4.6867, 4.6549, 4.7070, 4.2976, 4.6211, 4.1941, 4.5810], device='cuda:3'), covar=tensor([0.0245, 0.0299, 0.0223, 0.0421, 0.0355, 0.0238, 0.0904, 0.0303], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0254, 0.0244, 0.0314, 0.0262, 0.0260, 0.0300, 0.0252], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 11:16:36,715 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9206, 5.0997, 5.1377, 5.2421, 4.9178, 5.7289, 5.2949, 5.0825], device='cuda:3'), covar=tensor([0.1135, 0.0588, 0.0720, 0.0815, 0.1392, 0.0713, 0.0787, 0.1487], device='cuda:3'), in_proj_covar=tensor([0.0348, 0.0274, 0.0298, 0.0296, 0.0322, 0.0406, 0.0271, 0.0396], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 11:16:43,289 INFO [train.py:898] (3/4) Epoch 19, batch 250, loss[loss=0.1679, simple_loss=0.26, pruned_loss=0.0379, over 18489.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2577, pruned_loss=0.04042, over 2572886.67 frames. ], batch size: 53, lr: 6.02e-03, grad_scale: 4.0
2023-03-09 11:16:52,483 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.860e+02 2.820e+02 3.379e+02 4.029e+02 8.035e+02, threshold=6.759e+02, percent-clipped=4.0
2023-03-09 11:17:02,986 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65680.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:17:38,368 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7869, 3.7308, 3.5215, 3.1835, 3.4974, 2.8395, 2.9505, 3.7783], device='cuda:3'), covar=tensor([0.0055, 0.0077, 0.0079, 0.0122, 0.0082, 0.0181, 0.0172, 0.0058], device='cuda:3'), in_proj_covar=tensor([0.0130, 0.0153, 0.0130, 0.0184, 0.0136, 0.0175, 0.0179, 0.0116], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 11:17:40,383 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65711.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:17:42,351 INFO [train.py:898] (3/4) Epoch 19, batch 300, loss[loss=0.1743, simple_loss=0.2658, pruned_loss=0.04136, over 18325.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2579, pruned_loss=0.03995, over 2795874.43 frames. ], batch size: 54, lr: 6.02e-03, grad_scale: 4.0
2023-03-09 11:18:07,828 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0
2023-03-09 11:18:35,671 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65759.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:18:40,561 INFO [train.py:898] (3/4) Epoch 19, batch 350, loss[loss=0.1955, simple_loss=0.2825, pruned_loss=0.05424, over 17737.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.258, pruned_loss=0.03975, over 2976610.88 frames. ], batch size: 70, lr: 6.01e-03, grad_scale: 4.0
2023-03-09 11:18:49,499 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.996e+02 2.649e+02 2.976e+02 3.551e+02 5.920e+02, threshold=5.952e+02, percent-clipped=0.0
2023-03-09 11:19:38,373 INFO [train.py:898] (3/4) Epoch 19, batch 400, loss[loss=0.1657, simple_loss=0.2595, pruned_loss=0.03595, over 18616.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2577, pruned_loss=0.03969, over 3116179.87 frames. ], batch size: 52, lr: 6.01e-03, grad_scale: 8.0
2023-03-09 11:20:07,777 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65838.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 11:20:08,081 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-09 11:20:21,368 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0268, 5.3234, 2.8045, 5.1733, 5.0519, 5.3231, 5.1543, 2.6841], device='cuda:3'), covar=tensor([0.0188, 0.0077, 0.0761, 0.0086, 0.0074, 0.0082, 0.0098, 0.1061], device='cuda:3'), in_proj_covar=tensor([0.0084, 0.0078, 0.0093, 0.0091, 0.0081, 0.0072, 0.0083, 0.0094], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 11:20:37,149 INFO [train.py:898] (3/4) Epoch 19, batch 450, loss[loss=0.1717, simple_loss=0.2514, pruned_loss=0.046, over 18479.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2565, pruned_loss=0.03938, over 3221266.23 frames. ], batch size: 44, lr: 6.01e-03, grad_scale: 8.0
2023-03-09 11:20:45,957 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6680, 6.2170, 5.5955, 6.0389, 5.8296, 5.6266, 6.2853, 6.2338], device='cuda:3'), covar=tensor([0.1141, 0.0633, 0.0361, 0.0623, 0.1258, 0.0654, 0.0489, 0.0565], device='cuda:3'), in_proj_covar=tensor([0.0595, 0.0512, 0.0368, 0.0535, 0.0729, 0.0532, 0.0726, 0.0547], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 11:20:46,703 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.864e+02 3.029e+02 3.525e+02 4.091e+02 6.836e+02, threshold=7.049e+02, percent-clipped=6.0
2023-03-09 11:21:06,053 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65888.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 11:21:18,787 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65899.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:21:35,187 INFO [train.py:898] (3/4) Epoch 19, batch 500, loss[loss=0.1761, simple_loss=0.2705, pruned_loss=0.04083, over 16983.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2572, pruned_loss=0.03964, over 3304594.13 frames. ], batch size: 78, lr: 6.01e-03, grad_scale: 8.0
2023-03-09 11:22:01,540 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5111, 4.0965, 4.0903, 3.1578, 3.5157, 3.2467, 2.4474, 2.1908], device='cuda:3'), covar=tensor([0.0252, 0.0155, 0.0098, 0.0330, 0.0362, 0.0223, 0.0750, 0.0915], device='cuda:3'), in_proj_covar=tensor([0.0068, 0.0057, 0.0060, 0.0067, 0.0087, 0.0065, 0.0075, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 11:22:08,484 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65941.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:22:33,523 INFO [train.py:898] (3/4) Epoch 19, batch 550, loss[loss=0.1665, simple_loss=0.2603, pruned_loss=0.03634, over 17788.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2571, pruned_loss=0.03961, over 3358396.62 frames. ], batch size: 70, lr: 6.01e-03, grad_scale: 8.0
2023-03-09 11:22:42,910 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.825e+02 2.542e+02 3.090e+02 3.456e+02 5.520e+02, threshold=6.179e+02, percent-clipped=0.0
2023-03-09 11:23:04,307 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=65989.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:23:35,303 INFO [train.py:898] (3/4) Epoch 19, batch 600, loss[loss=0.1572, simple_loss=0.2415, pruned_loss=0.03646, over 18247.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2562, pruned_loss=0.03918, over 3416845.56 frames. ], batch size: 45, lr: 6.00e-03, grad_scale: 8.0
2023-03-09 11:23:55,793 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3915, 5.1785, 5.6458, 5.6924, 5.3076, 6.1785, 5.8665, 5.5292], device='cuda:3'), covar=tensor([0.1158, 0.0624, 0.0721, 0.0694, 0.1551, 0.0762, 0.0649, 0.1468], device='cuda:3'), in_proj_covar=tensor([0.0351, 0.0274, 0.0301, 0.0300, 0.0325, 0.0413, 0.0273, 0.0399], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 11:23:58,213 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6515, 4.1890, 4.1747, 3.1642, 3.5267, 3.2900, 2.4889, 2.2480], device='cuda:3'), covar=tensor([0.0233, 0.0179, 0.0106, 0.0363, 0.0382, 0.0241, 0.0791, 0.0933], device='cuda:3'), in_proj_covar=tensor([0.0068, 0.0058, 0.0060, 0.0067, 0.0087, 0.0066, 0.0075, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 11:24:15,335 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0331, 5.4244, 2.9160, 5.2699, 5.2240, 5.4543, 5.3597, 2.8627], device='cuda:3'), covar=tensor([0.0177, 0.0049, 0.0660, 0.0072, 0.0056, 0.0056, 0.0063, 0.0942], device='cuda:3'), in_proj_covar=tensor([0.0086, 0.0080, 0.0094, 0.0093, 0.0083, 0.0073, 0.0084, 0.0095], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 11:24:33,508 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-09 11:24:34,016 INFO [train.py:898] (3/4) Epoch 19, batch 650, loss[loss=0.1433, simple_loss=0.2288, pruned_loss=0.02892, over 18471.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2553, pruned_loss=0.03888, over 3466219.16 frames. ], batch size: 43, lr: 6.00e-03, grad_scale: 8.0
2023-03-09 11:24:41,919 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0079, 5.5489, 5.1462, 5.3274, 5.1769, 5.0330, 5.6297, 5.5496], device='cuda:3'), covar=tensor([0.1189, 0.0698, 0.0691, 0.0701, 0.1372, 0.0696, 0.0558, 0.0645], device='cuda:3'), in_proj_covar=tensor([0.0602, 0.0519, 0.0373, 0.0543, 0.0738, 0.0538, 0.0733, 0.0552], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 11:24:42,676 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.920e+02 2.591e+02 2.961e+02 3.650e+02 8.631e+02, threshold=5.923e+02, percent-clipped=2.0
2023-03-09 11:25:32,786 INFO [train.py:898] (3/4) Epoch 19, batch 700, loss[loss=0.1477, simple_loss=0.2296, pruned_loss=0.03283, over 18499.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2568, pruned_loss=0.03938, over 3492611.25 frames. ], batch size: 44, lr: 6.00e-03, grad_scale: 8.0
2023-03-09 11:25:52,053 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.49 vs. limit=2.0
2023-03-09 11:26:09,941 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66144.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:26:24,717 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0
2023-03-09 11:26:27,948 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66160.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 11:26:31,032 INFO [train.py:898] (3/4) Epoch 19, batch 750, loss[loss=0.1806, simple_loss=0.271, pruned_loss=0.04512, over 17956.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2563, pruned_loss=0.03929, over 3510898.06 frames. ], batch size: 65, lr: 6.00e-03, grad_scale: 8.0
2023-03-09 11:26:40,056 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.858e+02 2.715e+02 3.373e+02 4.119e+02 8.892e+02, threshold=6.747e+02, percent-clipped=6.0
2023-03-09 11:27:02,108 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66188.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:27:08,747 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66194.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 11:27:21,511 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66205.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 11:27:30,196 INFO [train.py:898] (3/4) Epoch 19, batch 800, loss[loss=0.1923, simple_loss=0.2754, pruned_loss=0.05455, over 12700.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2569, pruned_loss=0.03962, over 3517928.11 frames. ], batch size: 129, lr: 5.99e-03, grad_scale: 8.0
2023-03-09 11:27:39,602 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66221.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 11:27:58,058 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=66236.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 11:28:23,813 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66258.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:28:29,020 INFO [train.py:898] (3/4) Epoch 19, batch 850, loss[loss=0.1573, simple_loss=0.245, pruned_loss=0.03482, over 18385.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2571, pruned_loss=0.03951, over 3529356.57 frames. ], batch size: 50, lr: 5.99e-03, grad_scale: 8.0
2023-03-09 11:28:38,323 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.829e+02 2.659e+02 3.037e+02 3.502e+02 7.094e+02, threshold=6.073e+02, percent-clipped=1.0
2023-03-09 11:29:16,427 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5076, 5.4693, 5.0919, 5.4412, 5.3843, 4.7827, 5.3166, 5.0012], device='cuda:3'), covar=tensor([0.0386, 0.0439, 0.1275, 0.0714, 0.0573, 0.0426, 0.0446, 0.1069], device='cuda:3'), in_proj_covar=tensor([0.0463, 0.0538, 0.0682, 0.0418, 0.0431, 0.0489, 0.0524, 0.0659], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 11:29:27,612 INFO [train.py:898] (3/4) Epoch 19, batch 900, loss[loss=0.1482, simple_loss=0.2389, pruned_loss=0.02877, over 18511.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2567, pruned_loss=0.03919, over 3546709.34 frames. ], batch size: 47, lr: 5.99e-03, grad_scale: 8.0
2023-03-09 11:29:34,914 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66319.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:29:41,866 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5706, 2.7773, 4.2740, 3.6062, 2.4320, 4.4529, 3.8822, 2.8528], device='cuda:3'), covar=tensor([0.0504, 0.1506, 0.0251, 0.0444, 0.1768, 0.0242, 0.0530, 0.1014], device='cuda:3'), in_proj_covar=tensor([0.0203, 0.0233, 0.0198, 0.0157, 0.0222, 0.0205, 0.0239, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 11:30:26,353 INFO [train.py:898] (3/4) Epoch 19, batch 950, loss[loss=0.1807, simple_loss=0.2736, pruned_loss=0.04387, over 18460.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2569, pruned_loss=0.03939, over 3556733.88 frames. ], batch size: 59, lr: 5.99e-03, grad_scale: 8.0
2023-03-09 11:30:35,531 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.870e+02 2.786e+02 3.276e+02 3.823e+02 6.453e+02, threshold=6.553e+02, percent-clipped=1.0
2023-03-09 11:31:24,791 INFO [train.py:898] (3/4) Epoch 19, batch 1000, loss[loss=0.1908, simple_loss=0.2803, pruned_loss=0.05063, over 16371.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2568, pruned_loss=0.03938, over 3552147.66 frames. ], batch size: 94, lr: 5.99e-03, grad_scale: 8.0
2023-03-09 11:32:10,769 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-09 11:32:14,703 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5464, 3.4804, 2.1674, 4.4287, 3.1202, 4.3051, 2.4152, 3.9033], device='cuda:3'), covar=tensor([0.0626, 0.0804, 0.1484, 0.0437, 0.0810, 0.0341, 0.1179, 0.0419], device='cuda:3'), in_proj_covar=tensor([0.0209, 0.0221, 0.0187, 0.0274, 0.0189, 0.0259, 0.0198, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 11:32:23,342 INFO [train.py:898] (3/4) Epoch 19, batch 1050, loss[loss=0.178, simple_loss=0.2778, pruned_loss=0.03906, over 18411.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2567, pruned_loss=0.03932, over 3560225.98 frames. ], batch size: 52, lr: 5.98e-03, grad_scale: 8.0
2023-03-09 11:32:29,938 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-09 11:32:31,031 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-09 11:32:32,545 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.909e+02 2.933e+02 3.279e+02 4.197e+02 8.258e+02, threshold=6.558e+02, percent-clipped=3.0
2023-03-09 11:32:53,897 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6196, 5.5358, 5.2220, 5.5330, 5.4664, 4.8359, 5.4397, 5.1310], device='cuda:3'), covar=tensor([0.0366, 0.0396, 0.1165, 0.0710, 0.0602, 0.0461, 0.0367, 0.0975], device='cuda:3'), in_proj_covar=tensor([0.0467, 0.0539, 0.0684, 0.0424, 0.0434, 0.0493, 0.0528, 0.0660], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 11:32:59,508 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66494.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 11:33:06,976 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66500.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 11:33:22,931 INFO [train.py:898] (3/4) Epoch 19, batch 1100, loss[loss=0.1569, simple_loss=0.2447, pruned_loss=0.03457, over 18245.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2562, pruned_loss=0.03887, over 3579613.45 frames. ], batch size: 47, lr: 5.98e-03, grad_scale: 8.0
2023-03-09 11:33:26,442 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66516.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:33:32,040 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66521.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:33:56,116 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=66542.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 11:34:20,978 INFO [train.py:898] (3/4) Epoch 19, batch 1150, loss[loss=0.1792, simple_loss=0.2632, pruned_loss=0.04759, over 18565.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.256, pruned_loss=0.03921, over 3567079.87 frames. ], batch size: 45, lr: 5.98e-03, grad_scale: 8.0
2023-03-09 11:34:29,828 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.816e+02 2.725e+02 3.159e+02 3.763e+02 6.148e+02, threshold=6.318e+02, percent-clipped=0.0
2023-03-09 11:34:42,859 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66582.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 11:35:14,683 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0128, 4.7518, 4.8377, 3.7567, 3.9828, 3.5686, 3.1500, 2.8274], device='cuda:3'), covar=tensor([0.0196, 0.0140, 0.0066, 0.0255, 0.0310, 0.0228, 0.0607, 0.0717], device='cuda:3'), in_proj_covar=tensor([0.0069, 0.0058, 0.0061, 0.0068, 0.0088, 0.0067, 0.0077, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 11:35:19,958 INFO [train.py:898] (3/4) Epoch 19, batch 1200, loss[loss=0.1561, simple_loss=0.2353, pruned_loss=0.03848, over 18571.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2562, pruned_loss=0.03934, over 3576026.06 frames. ], batch size: 45, lr: 5.98e-03, grad_scale: 8.0
2023-03-09 11:35:21,310 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66614.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:35:26,309 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.14 vs. limit=5.0
2023-03-09 11:35:30,698 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-09 11:36:13,823 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-09 11:36:18,957 INFO [train.py:898] (3/4) Epoch 19, batch 1250, loss[loss=0.1764, simple_loss=0.2735, pruned_loss=0.03969, over 18301.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2573, pruned_loss=0.03953, over 3580008.81 frames. ], batch size: 54, lr: 5.97e-03, grad_scale: 8.0
2023-03-09 11:36:27,896 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.899e+02 2.792e+02 3.362e+02 4.146e+02 6.699e+02, threshold=6.725e+02, percent-clipped=2.0
2023-03-09 11:37:16,536 INFO [train.py:898] (3/4) Epoch 19, batch 1300, loss[loss=0.154, simple_loss=0.2337, pruned_loss=0.03714, over 18440.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2571, pruned_loss=0.03942, over 3588439.09 frames. ], batch size: 43, lr: 5.97e-03, grad_scale: 8.0
2023-03-09 11:37:37,899 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66731.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:38:14,590 INFO [train.py:898] (3/4) Epoch 19, batch 1350, loss[loss=0.1523, simple_loss=0.2485, pruned_loss=0.02808, over 18407.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2568, pruned_loss=0.03936, over 3586772.47 frames. ], batch size: 52, lr: 5.97e-03, grad_scale: 8.0
2023-03-09 11:38:24,644 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.905e+02 2.817e+02 3.306e+02 3.967e+02 8.245e+02, threshold=6.612e+02, percent-clipped=2.0
2023-03-09 11:38:48,303 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66792.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 11:38:52,747 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5690, 6.0723, 5.5099, 5.9033, 5.7167, 5.4930, 6.1924, 6.1434], device='cuda:3'), covar=tensor([0.1092, 0.0765, 0.0450, 0.0701, 0.1241, 0.0708, 0.0483, 0.0589], device='cuda:3'), in_proj_covar=tensor([0.0602, 0.0520, 0.0372, 0.0544, 0.0732, 0.0539, 0.0735, 0.0553], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 11:38:57,419 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66800.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:39:12,421 INFO [train.py:898] (3/4) Epoch 19, batch 1400, loss[loss=0.1326, simple_loss=0.2164, pruned_loss=0.02444, over 18245.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2571, pruned_loss=0.03963, over 3590111.20 frames. ], batch size: 45, lr: 5.97e-03, grad_scale: 8.0
2023-03-09 11:39:16,799 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66816.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:39:43,129 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.71 vs. limit=2.0
2023-03-09 11:39:54,054 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=66848.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:40:11,316 INFO [train.py:898] (3/4) Epoch 19, batch 1450, loss[loss=0.1426, simple_loss=0.229, pruned_loss=0.0281, over 18487.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2566, pruned_loss=0.03948, over 3587581.58 frames. ], batch size: 47, lr: 5.97e-03, grad_scale: 8.0
2023-03-09 11:40:12,676 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=66864.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 11:40:21,463 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.935e+02 2.720e+02 3.294e+02 4.147e+02 8.956e+02, threshold=6.588e+02, percent-clipped=4.0
2023-03-09 11:40:28,887 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66877.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:41:10,289 INFO [train.py:898] (3/4) Epoch 19, batch 1500, loss[loss=0.1525, simple_loss=0.2451, pruned_loss=0.03001, over 18362.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.256, pruned_loss=0.03925, over 3582914.72 frames. ], batch size: 46, lr: 5.96e-03, grad_scale: 8.0
2023-03-09 11:41:11,682 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66914.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:42:02,787 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66957.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:42:08,455 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=66962.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:42:09,406 INFO [train.py:898] (3/4) Epoch 19, batch 1550, loss[loss=0.1636, simple_loss=0.2458, pruned_loss=0.04068, over 18279.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2546, pruned_loss=0.03861, over 3586388.86 frames. ], batch size: 49, lr: 5.96e-03, grad_scale: 8.0
2023-03-09 11:42:18,935 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.827e+02 2.644e+02 3.102e+02 3.654e+02 6.872e+02, threshold=6.204e+02, percent-clipped=2.0
2023-03-09 11:42:19,327 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0788, 5.1429, 5.2012, 4.8785, 4.8462, 4.9378, 5.2367, 5.1974], device='cuda:3'), covar=tensor([0.0057, 0.0055, 0.0048, 0.0098, 0.0069, 0.0139, 0.0076, 0.0092], device='cuda:3'), in_proj_covar=tensor([0.0092, 0.0068, 0.0073, 0.0091, 0.0075, 0.0103, 0.0086, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 11:43:08,130 INFO [train.py:898] (3/4) Epoch 19, batch 1600, loss[loss=0.1463, simple_loss=0.2281, pruned_loss=0.03223, over 18398.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2548, pruned_loss=0.03876, over 3585709.04 frames. ], batch size: 42, lr: 5.96e-03, grad_scale: 8.0
2023-03-09 11:43:14,647 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67018.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 11:44:06,915 INFO [train.py:898] (3/4) Epoch 19, batch 1650, loss[loss=0.1635, simple_loss=0.2603, pruned_loss=0.03334, over 18363.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2555, pruned_loss=0.03891, over 3591892.89 frames. ], batch size: 55, lr: 5.96e-03, grad_scale: 8.0
2023-03-09 11:44:08,880 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-09 11:44:16,364 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.888e+02 2.665e+02 3.285e+02 3.842e+02 7.636e+02, threshold=6.571e+02, percent-clipped=3.0
2023-03-09 11:44:31,269 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67083.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:44:36,664 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67087.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 11:45:05,337 INFO [train.py:898] (3/4) Epoch 19, batch 1700, loss[loss=0.153, simple_loss=0.2326, pruned_loss=0.03672, over 18483.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2563, pruned_loss=0.03939, over 3586511.33 frames. ], batch size: 47, lr: 5.95e-03, grad_scale: 8.0
2023-03-09 11:45:35,943 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5865, 2.1434, 2.4910, 2.5916, 3.0824, 4.5458, 4.3793, 3.3680], device='cuda:3'), covar=tensor([0.1748, 0.2513, 0.2950, 0.1862, 0.2372, 0.0270, 0.0422, 0.0857], device='cuda:3'), in_proj_covar=tensor([0.0292, 0.0340, 0.0372, 0.0272, 0.0388, 0.0233, 0.0292, 0.0247], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 11:45:43,208 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67144.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:46:04,411 INFO [train.py:898] (3/4) Epoch 19, batch 1750, loss[loss=0.1628, simple_loss=0.259, pruned_loss=0.03332, over 18384.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2573, pruned_loss=0.03945, over 3580204.23 frames. ], batch size: 50, lr: 5.95e-03, grad_scale: 8.0
2023-03-09 11:46:13,341 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.723e+02 2.728e+02 3.267e+02 4.070e+02 1.061e+03, threshold=6.534e+02, percent-clipped=5.0
2023-03-09 11:46:21,170 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67177.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:46:26,272 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0
2023-03-09 11:46:44,251 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5701, 2.8965, 2.6395, 2.9243, 3.6977, 3.5975, 3.1892, 2.9474], device='cuda:3'), covar=tensor([0.0165, 0.0264, 0.0516, 0.0398, 0.0156, 0.0133, 0.0372, 0.0345], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0132, 0.0162, 0.0154, 0.0127, 0.0111, 0.0150, 0.0151], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 11:47:02,903 INFO [train.py:898] (3/4) Epoch 19, batch 1800, loss[loss=0.1738, simple_loss=0.259, pruned_loss=0.04428, over 18531.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.258, pruned_loss=0.03983, over 3580557.35 frames. ], batch size: 49, lr: 5.95e-03, grad_scale: 8.0
2023-03-09 11:47:17,350 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=67225.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:48:01,529 INFO [train.py:898] (3/4) Epoch 19, batch 1850, loss[loss=0.196, simple_loss=0.2913, pruned_loss=0.05031, over 17687.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2576, pruned_loss=0.03984, over 3579291.91 frames. ], batch size: 70, lr: 5.95e-03, grad_scale: 8.0
2023-03-09 11:48:10,591 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.962e+02 2.832e+02 3.331e+02 4.104e+02 8.145e+02, threshold=6.662e+02, percent-clipped=3.0
2023-03-09 11:48:20,719 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6066, 4.2554, 4.1916, 3.2993, 3.5455, 3.1646, 2.5659, 2.3550], device='cuda:3'), covar=tensor([0.0245, 0.0136, 0.0103, 0.0295, 0.0382, 0.0286, 0.0717, 0.0894], device='cuda:3'), in_proj_covar=tensor([0.0068, 0.0057, 0.0060, 0.0067, 0.0087, 0.0066, 0.0075, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 11:48:37,669 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8136, 3.8221, 3.6610, 3.2984, 3.4742, 2.9843, 2.9452, 3.6667], device='cuda:3'), covar=tensor([0.0056, 0.0071, 0.0068, 0.0140, 0.0100, 0.0175, 0.0196, 0.0077], device='cuda:3'), in_proj_covar=tensor([0.0130, 0.0155, 0.0129, 0.0182, 0.0136, 0.0176, 0.0178, 0.0115], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 11:49:00,070 INFO [train.py:898] (3/4) Epoch 19, batch 1900, loss[loss=0.1639, simple_loss=0.243, pruned_loss=0.04239, over 18270.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2572, pruned_loss=0.03978, over 3570040.65 frames. ], batch size: 45, lr: 5.95e-03, grad_scale: 4.0
2023-03-09 11:49:00,336 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67313.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:49:20,565 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0
2023-03-09 11:49:26,915 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5040, 4.9250, 4.8688, 4.8829, 4.4723, 4.8055, 4.2789, 4.7815], device='cuda:3'), covar=tensor([0.0273, 0.0315, 0.0248, 0.0549, 0.0421, 0.0255, 0.1096, 0.0376], device='cuda:3'), in_proj_covar=tensor([0.0210, 0.0257, 0.0247, 0.0320, 0.0263, 0.0263, 0.0304, 0.0251], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 11:49:58,273 INFO [train.py:898] (3/4) Epoch 19, batch 1950, loss[loss=0.1554, simple_loss=0.2547, pruned_loss=0.0281, over 18344.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2568, pruned_loss=0.03944, over 3579732.04 frames. ], batch size: 55, lr: 5.94e-03, grad_scale: 4.0
2023-03-09 11:50:08,471 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.842e+02 2.816e+02 3.321e+02 4.112e+02 1.785e+03, threshold=6.643e+02, percent-clipped=3.0
2023-03-09 11:50:22,666 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0
2023-03-09 11:50:26,386 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67387.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:50:31,480 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4049, 5.2168, 5.6266, 5.6666, 5.3068, 6.2039, 5.8756, 5.3775], device='cuda:3'), covar=tensor([0.1120, 0.0667, 0.0710, 0.0724, 0.1568, 0.0738, 0.0638, 0.1785], device='cuda:3'), in_proj_covar=tensor([0.0352, 0.0277, 0.0299, 0.0298, 0.0325, 0.0413, 0.0273, 0.0403], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 11:50:33,987 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7946, 5.2695, 5.2729, 5.3561, 4.7389, 5.1557, 4.2020, 5.1522], device='cuda:3'), covar=tensor([0.0283, 0.0427, 0.0304, 0.0408, 0.0435, 0.0297, 0.1830, 0.0347], device='cuda:3'), in_proj_covar=tensor([0.0211, 0.0256, 0.0247, 0.0319, 0.0264, 0.0263, 0.0304, 0.0251], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 11:50:48,653 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5766, 6.0923, 5.6168, 5.8800, 5.7306, 5.5377, 6.1881, 6.0882], device='cuda:3'), covar=tensor([0.1310, 0.0810, 0.0421, 0.0803, 0.1415, 0.0741, 0.0575, 0.0759], device='cuda:3'), in_proj_covar=tensor([0.0610, 0.0522, 0.0376, 0.0548, 0.0735, 0.0541, 0.0736, 0.0558], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 11:50:57,635 INFO [train.py:898] (3/4) Epoch 19, batch 2000, loss[loss=0.1488, simple_loss=0.2384, pruned_loss=0.02959, over 18493.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2565, pruned_loss=0.03924, over 3595426.85 frames. ], batch size: 47, lr: 5.94e-03, grad_scale: 8.0
2023-03-09 11:51:23,508 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=67435.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 11:51:27,983 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67439.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:51:56,182 INFO [train.py:898] (3/4) Epoch 19, batch 2050, loss[loss=0.1662, simple_loss=0.2653, pruned_loss=0.03356, over 16864.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2572, pruned_loss=0.03971, over 3586282.72 frames. ], batch size: 78, lr: 5.94e-03, grad_scale: 8.0
2023-03-09 11:52:06,258 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.056e+02 2.683e+02 3.184e+02 3.899e+02 7.354e+02, threshold=6.369e+02, percent-clipped=1.0
2023-03-09 11:52:54,239 INFO [train.py:898] (3/4) Epoch 19, batch 2100, loss[loss=0.1531, simple_loss=0.2391, pruned_loss=0.03351, over 18370.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2574, pruned_loss=0.03953, over 3590304.29 frames. ], batch size: 46, lr: 5.94e-03, grad_scale: 8.0
2023-03-09 11:52:54,544 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67513.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:53:38,590 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4750, 3.4002, 2.2058, 4.3633, 3.0783, 4.1894, 2.4889, 3.7971], device='cuda:3'), covar=tensor([0.0667, 0.0795, 0.1394, 0.0441, 0.0825, 0.0300, 0.1165, 0.0392], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0223, 0.0188, 0.0277, 0.0190, 0.0259, 0.0201, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 11:53:52,403 INFO [train.py:898] (3/4) Epoch 19, batch 2150, loss[loss=0.1574, simple_loss=0.2457, pruned_loss=0.03458, over 18283.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2575, pruned_loss=0.03957, over 3592842.18 frames. ], batch size: 47, lr: 5.93e-03, grad_scale: 8.0
2023-03-09 11:54:03,261 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.859e+02 2.589e+02 3.256e+02 4.119e+02 1.040e+03, threshold=6.512e+02, percent-clipped=4.0
2023-03-09 11:54:06,012 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67574.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 11:54:47,599 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4575, 3.3591, 3.2651, 2.9300, 3.1811, 2.5996, 2.6079, 3.3976], device='cuda:3'), covar=tensor([0.0063, 0.0100, 0.0087, 0.0134, 0.0094, 0.0198, 0.0212, 0.0076], device='cuda:3'), in_proj_covar=tensor([0.0131, 0.0154, 0.0130, 0.0184, 0.0137, 0.0175, 0.0178, 0.0115], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 11:54:51,798 INFO [train.py:898] (3/4) Epoch 19, batch 2200, loss[loss=0.1433, simple_loss=0.2308, pruned_loss=0.0279, over 18351.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2574, pruned_loss=0.03945, over 3599580.93 frames. ], batch size: 46, lr: 5.93e-03, grad_scale: 8.0
2023-03-09 11:54:52,075 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67613.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:54:57,906 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0
2023-03-09 11:55:48,946 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=67661.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:55:49,140 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0714, 5.4802, 2.7850, 5.3075, 5.2125, 5.4862, 5.2537, 2.9100], device='cuda:3'), covar=tensor([0.0191, 0.0054, 0.0791, 0.0069, 0.0058, 0.0059, 0.0082, 0.0947], device='cuda:3'), in_proj_covar=tensor([0.0087, 0.0079, 0.0095, 0.0093, 0.0083, 0.0074, 0.0083, 0.0096], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 11:55:51,007 INFO [train.py:898] (3/4) Epoch 19, batch 2250, loss[loss=0.1734, simple_loss=0.2639, pruned_loss=0.04146, over 18489.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2572, pruned_loss=0.03924, over 3593359.72 frames. ], batch size: 51, lr: 5.93e-03, grad_scale: 8.0
2023-03-09 11:56:01,714 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.067e+02 2.672e+02 3.118e+02 3.558e+02 7.247e+02, threshold=6.237e+02, percent-clipped=1.0
2023-03-09 11:56:11,234 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67680.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:56:25,076 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67692.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:56:27,412 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3163, 2.7677, 2.4344, 2.7403, 3.4937, 3.3640, 2.9755, 2.7487], device='cuda:3'), covar=tensor([0.0198, 0.0269, 0.0580, 0.0437, 0.0217, 0.0152, 0.0412, 0.0425], device='cuda:3'), in_proj_covar=tensor([0.0137, 0.0131, 0.0163, 0.0155, 0.0128, 0.0113, 0.0152, 0.0152], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 11:56:50,295 INFO [train.py:898] (3/4) Epoch 19, batch 2300, loss[loss=0.1646, simple_loss=0.2477, pruned_loss=0.04078, over 17691.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2574, pruned_loss=0.0394, over 3586727.88 frames. ], batch size: 39, lr: 5.93e-03, grad_scale: 8.0
2023-03-09 11:57:19,217 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.77 vs. limit=2.0
2023-03-09 11:57:20,966 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67739.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:57:23,505 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67741.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:57:37,071 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67753.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 11:57:48,542 INFO [train.py:898] (3/4) Epoch 19, batch 2350, loss[loss=0.2059, simple_loss=0.2977, pruned_loss=0.05708, over 18349.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2573, pruned_loss=0.03951, over 3587741.23 frames. ], batch size: 55, lr: 5.93e-03, grad_scale: 8.0
2023-03-09 11:57:59,072 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.915e+02 2.613e+02 3.195e+02 3.840e+02 8.434e+02, threshold=6.389e+02, percent-clipped=1.0
2023-03-09 11:58:16,652 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=67787.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 11:58:34,623 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5935, 6.0629, 5.6156, 5.8883, 5.6538, 5.4879, 6.1459, 6.0600], device='cuda:3'), covar=tensor([0.1181, 0.0726, 0.0443, 0.0685, 0.1400, 0.0736, 0.0566, 0.0689], device='cuda:3'), in_proj_covar=tensor([0.0609, 0.0522, 0.0374, 0.0549, 0.0739, 0.0540, 0.0738, 0.0560], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 11:58:47,098 INFO [train.py:898] (3/4) Epoch 19, batch 2400, loss[loss=0.1721, simple_loss=0.2633, pruned_loss=0.04046, over 18495.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2575, pruned_loss=0.03963, over 3582566.43 frames. ], batch size: 47, lr: 5.92e-03, grad_scale: 8.0
2023-03-09 11:59:40,545 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8068, 3.7510, 5.1152, 2.9965, 4.4136, 2.6868, 3.1443, 1.7904], device='cuda:3'), covar=tensor([0.1119, 0.0832, 0.0149, 0.0843, 0.0465, 0.2316, 0.2390, 0.2098], device='cuda:3'), in_proj_covar=tensor([0.0217, 0.0240, 0.0177, 0.0192, 0.0254, 0.0268, 0.0318, 0.0230], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 11:59:45,720 INFO [train.py:898] (3/4) Epoch 19, batch 2450, loss[loss=0.1709, simple_loss=0.2662, pruned_loss=0.03779, over 18569.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2576, pruned_loss=0.03947, over 3592684.30 frames. ], batch size: 54, lr: 5.92e-03, grad_scale: 8.0
2023-03-09 11:59:54,024 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67869.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 11:59:57,184 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.025e+02 2.777e+02 3.368e+02 4.107e+02 6.941e+02, threshold=6.736e+02, percent-clipped=1.0
2023-03-09 12:00:42,869 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9409, 4.9130, 4.9661, 4.7440, 4.6858, 4.7910, 5.0794, 5.1014], device='cuda:3'), covar=tensor([0.0071, 0.0076, 0.0063, 0.0107, 0.0069, 0.0152, 0.0079, 0.0117], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0067, 0.0071, 0.0089, 0.0073, 0.0099, 0.0084, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 12:00:44,895 INFO [train.py:898] (3/4) Epoch 19, batch 2500, loss[loss=0.1649, simple_loss=0.2573, pruned_loss=0.03626, over 18412.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2576, pruned_loss=0.03939, over 3579428.01 frames. ], batch size: 48, lr: 5.92e-03, grad_scale: 8.0
2023-03-09 12:01:38,583 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3131, 2.8099, 2.4330, 2.7389, 3.4644, 3.3238, 2.9545, 2.6648], device='cuda:3'), covar=tensor([0.0186, 0.0228, 0.0577, 0.0386, 0.0170, 0.0153, 0.0340, 0.0411], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0129, 0.0162, 0.0153, 0.0127, 0.0112, 0.0149, 0.0152], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 12:01:43,839 INFO [train.py:898] (3/4) Epoch 19, batch 2550, loss[loss=0.161, simple_loss=0.2513, pruned_loss=0.03534, over 18494.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2571, pruned_loss=0.03909, over 3584876.10 frames.
], batch size: 47, lr: 5.92e-03, grad_scale: 4.0 2023-03-09 12:01:46,402 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4255, 3.8976, 5.0832, 4.1958, 2.7958, 2.7641, 4.1286, 5.2893], device='cuda:3'), covar=tensor([0.0900, 0.1306, 0.0151, 0.0455, 0.1086, 0.1178, 0.0473, 0.0154], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0270, 0.0144, 0.0179, 0.0190, 0.0186, 0.0190, 0.0188], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:01:56,274 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.852e+02 2.690e+02 3.137e+02 3.674e+02 8.020e+02, threshold=6.273e+02, percent-clipped=2.0 2023-03-09 12:02:06,074 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67981.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:02:48,133 INFO [train.py:898] (3/4) Epoch 19, batch 2600, loss[loss=0.1503, simple_loss=0.2279, pruned_loss=0.03632, over 17745.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2573, pruned_loss=0.03895, over 3585489.53 frames. ], batch size: 39, lr: 5.91e-03, grad_scale: 4.0 2023-03-09 12:02:55,488 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.40 vs. limit=5.0 2023-03-09 12:03:13,574 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6470, 3.9075, 2.3616, 3.7094, 4.8031, 2.4965, 3.6654, 3.7331], device='cuda:3'), covar=tensor([0.0199, 0.1010, 0.1582, 0.0670, 0.0090, 0.1264, 0.0675, 0.0733], device='cuda:3'), in_proj_covar=tensor([0.0157, 0.0263, 0.0201, 0.0193, 0.0120, 0.0180, 0.0214, 0.0224], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:03:16,206 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68036.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:03:23,045 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68042.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:03:29,712 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68048.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 12:03:47,004 INFO [train.py:898] (3/4) Epoch 19, batch 2650, loss[loss=0.2137, simple_loss=0.2886, pruned_loss=0.06941, over 12782.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2571, pruned_loss=0.03917, over 3578070.66 frames. ], batch size: 129, lr: 5.91e-03, grad_scale: 4.0 2023-03-09 12:03:56,678 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.19 vs. 
limit=5.0 2023-03-09 12:03:58,872 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.938e+02 2.796e+02 3.306e+02 3.948e+02 6.802e+02, threshold=6.612e+02, percent-clipped=2.0 2023-03-09 12:04:16,012 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68087.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 12:04:23,667 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8112, 4.9607, 4.9689, 5.0112, 4.7479, 5.5526, 5.1751, 4.8109], device='cuda:3'), covar=tensor([0.1150, 0.0807, 0.0818, 0.0972, 0.1480, 0.0813, 0.0675, 0.1851], device='cuda:3'), in_proj_covar=tensor([0.0349, 0.0275, 0.0301, 0.0299, 0.0325, 0.0412, 0.0272, 0.0402], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3') 2023-03-09 12:04:27,356 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0514, 4.1532, 2.4635, 4.0699, 5.2990, 2.5993, 3.9293, 4.0760], device='cuda:3'), covar=tensor([0.0169, 0.1179, 0.1538, 0.0615, 0.0074, 0.1194, 0.0632, 0.0688], device='cuda:3'), in_proj_covar=tensor([0.0159, 0.0266, 0.0202, 0.0195, 0.0121, 0.0181, 0.0215, 0.0226], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:04:28,692 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-09 12:04:44,746 INFO [train.py:898] (3/4) Epoch 19, batch 2700, loss[loss=0.1546, simple_loss=0.2422, pruned_loss=0.03349, over 18378.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2564, pruned_loss=0.03888, over 3580680.37 frames. ], batch size: 50, lr: 5.91e-03, grad_scale: 4.0 2023-03-09 12:05:27,222 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68148.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 12:05:39,843 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6002, 3.4986, 2.1430, 4.4628, 3.1164, 4.3605, 2.5016, 3.9440], device='cuda:3'), covar=tensor([0.0644, 0.0809, 0.1573, 0.0460, 0.0865, 0.0337, 0.1154, 0.0423], device='cuda:3'), in_proj_covar=tensor([0.0210, 0.0225, 0.0188, 0.0278, 0.0193, 0.0259, 0.0202, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:05:43,839 INFO [train.py:898] (3/4) Epoch 19, batch 2750, loss[loss=0.1554, simple_loss=0.2467, pruned_loss=0.032, over 18479.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2565, pruned_loss=0.03895, over 3584138.06 frames. ], batch size: 51, lr: 5.91e-03, grad_scale: 4.0 2023-03-09 12:05:51,505 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68169.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:05:55,856 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.532e+02 2.565e+02 3.131e+02 3.754e+02 7.044e+02, threshold=6.262e+02, percent-clipped=1.0 2023-03-09 12:06:42,968 INFO [train.py:898] (3/4) Epoch 19, batch 2800, loss[loss=0.1687, simple_loss=0.2631, pruned_loss=0.03716, over 18327.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2569, pruned_loss=0.03901, over 3584337.18 frames. ], batch size: 54, lr: 5.91e-03, grad_scale: 8.0 2023-03-09 12:06:47,657 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=68217.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:07:41,660 INFO [train.py:898] (3/4) Epoch 19, batch 2850, loss[loss=0.1387, simple_loss=0.2192, pruned_loss=0.02912, over 17606.00 frames. 
], tot_loss[loss=0.1671, simple_loss=0.2565, pruned_loss=0.0388, over 3589393.21 frames. ], batch size: 39, lr: 5.90e-03, grad_scale: 8.0 2023-03-09 12:07:53,622 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.861e+02 2.563e+02 3.164e+02 3.682e+02 9.840e+02, threshold=6.328e+02, percent-clipped=2.0 2023-03-09 12:07:54,128 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8727, 3.8540, 5.1702, 4.5830, 3.4511, 3.1396, 4.5726, 5.3543], device='cuda:3'), covar=tensor([0.0763, 0.1499, 0.0160, 0.0340, 0.0850, 0.1076, 0.0341, 0.0378], device='cuda:3'), in_proj_covar=tensor([0.0147, 0.0271, 0.0145, 0.0179, 0.0191, 0.0188, 0.0191, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:08:15,316 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-09 12:08:41,533 INFO [train.py:898] (3/4) Epoch 19, batch 2900, loss[loss=0.1599, simple_loss=0.2349, pruned_loss=0.0424, over 18474.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2565, pruned_loss=0.03881, over 3598725.76 frames. ], batch size: 44, lr: 5.90e-03, grad_scale: 8.0 2023-03-09 12:09:09,246 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68336.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:09:10,301 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68337.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:09:23,412 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68348.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:09:40,279 INFO [train.py:898] (3/4) Epoch 19, batch 2950, loss[loss=0.1803, simple_loss=0.2758, pruned_loss=0.04243, over 17720.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2562, pruned_loss=0.03889, over 3581895.69 frames. ], batch size: 70, lr: 5.90e-03, grad_scale: 8.0 2023-03-09 12:09:51,486 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.776e+02 2.615e+02 3.161e+02 3.773e+02 8.205e+02, threshold=6.323e+02, percent-clipped=2.0 2023-03-09 12:10:04,980 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=68384.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:10:19,665 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=68396.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:10:23,162 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68399.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:10:39,479 INFO [train.py:898] (3/4) Epoch 19, batch 3000, loss[loss=0.1758, simple_loss=0.2569, pruned_loss=0.04737, over 18379.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2556, pruned_loss=0.03875, over 3591038.42 frames. ], batch size: 46, lr: 5.90e-03, grad_scale: 8.0 2023-03-09 12:10:39,479 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 12:10:51,587 INFO [train.py:932] (3/4) Epoch 19, validation: loss=0.1511, simple_loss=0.2509, pruned_loss=0.02564, over 944034.00 frames. 2023-03-09 12:10:51,588 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 12:10:57,880 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. 
limit=2.0 2023-03-09 12:11:24,960 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1089, 5.0869, 4.7097, 5.0327, 5.0184, 4.4945, 4.9252, 4.7586], device='cuda:3'), covar=tensor([0.0431, 0.0447, 0.1466, 0.0737, 0.0586, 0.0421, 0.0481, 0.0935], device='cuda:3'), in_proj_covar=tensor([0.0467, 0.0539, 0.0677, 0.0419, 0.0434, 0.0485, 0.0533, 0.0659], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 12:11:27,653 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68443.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 12:11:47,340 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68460.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:11:48,950 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.32 vs. limit=5.0 2023-03-09 12:11:50,397 INFO [train.py:898] (3/4) Epoch 19, batch 3050, loss[loss=0.1877, simple_loss=0.2727, pruned_loss=0.05133, over 18494.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2556, pruned_loss=0.03882, over 3594866.45 frames. ], batch size: 59, lr: 5.90e-03, grad_scale: 8.0 2023-03-09 12:12:02,237 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.030e+02 2.743e+02 3.169e+02 3.811e+02 7.810e+02, threshold=6.338e+02, percent-clipped=1.0 2023-03-09 12:12:49,527 INFO [train.py:898] (3/4) Epoch 19, batch 3100, loss[loss=0.1778, simple_loss=0.2664, pruned_loss=0.0446, over 18267.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2562, pruned_loss=0.03881, over 3597304.03 frames. ], batch size: 49, lr: 5.89e-03, grad_scale: 8.0 2023-03-09 12:13:25,384 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5065, 2.8303, 2.4639, 2.8689, 3.6300, 3.4024, 3.0484, 2.9134], device='cuda:3'), covar=tensor([0.0196, 0.0312, 0.0552, 0.0351, 0.0181, 0.0179, 0.0367, 0.0344], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0129, 0.0159, 0.0152, 0.0126, 0.0111, 0.0149, 0.0149], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:13:48,221 INFO [train.py:898] (3/4) Epoch 19, batch 3150, loss[loss=0.1562, simple_loss=0.2364, pruned_loss=0.03795, over 18479.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2564, pruned_loss=0.03889, over 3599790.39 frames. ], batch size: 43, lr: 5.89e-03, grad_scale: 8.0 2023-03-09 12:13:59,951 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.941e+02 2.741e+02 3.192e+02 3.844e+02 7.803e+02, threshold=6.385e+02, percent-clipped=3.0 2023-03-09 12:14:05,925 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68578.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:14:12,817 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6267, 2.9328, 2.4972, 2.9739, 3.6926, 3.5235, 3.1214, 3.0048], device='cuda:3'), covar=tensor([0.0171, 0.0269, 0.0602, 0.0333, 0.0169, 0.0154, 0.0373, 0.0331], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0129, 0.0160, 0.0152, 0.0126, 0.0112, 0.0149, 0.0150], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:14:14,413 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.57 vs. limit=5.0 2023-03-09 12:14:46,959 INFO [train.py:898] (3/4) Epoch 19, batch 3200, loss[loss=0.1633, simple_loss=0.2381, pruned_loss=0.04427, over 18455.00 frames. 
], tot_loss[loss=0.1672, simple_loss=0.2565, pruned_loss=0.03897, over 3605478.55 frames. ], batch size: 43, lr: 5.89e-03, grad_scale: 8.0 2023-03-09 12:14:56,973 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-09 12:15:16,394 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68637.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:15:16,901 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.77 vs. limit=2.0 2023-03-09 12:15:18,751 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68639.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:15:46,039 INFO [train.py:898] (3/4) Epoch 19, batch 3250, loss[loss=0.1671, simple_loss=0.2665, pruned_loss=0.03379, over 18372.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2562, pruned_loss=0.03894, over 3610137.45 frames. ], batch size: 55, lr: 5.89e-03, grad_scale: 8.0 2023-03-09 12:15:57,316 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.558e+02 2.445e+02 3.074e+02 3.780e+02 1.190e+03, threshold=6.148e+02, percent-clipped=4.0 2023-03-09 12:16:10,685 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4775, 3.3985, 2.1195, 4.3541, 2.8760, 4.2098, 2.4245, 3.8681], device='cuda:3'), covar=tensor([0.0692, 0.0807, 0.1475, 0.0504, 0.1017, 0.0331, 0.1168, 0.0443], device='cuda:3'), in_proj_covar=tensor([0.0210, 0.0222, 0.0186, 0.0279, 0.0190, 0.0259, 0.0200, 0.0198], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:16:12,190 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=68685.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:16:12,378 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68685.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:16:45,053 INFO [train.py:898] (3/4) Epoch 19, batch 3300, loss[loss=0.162, simple_loss=0.2601, pruned_loss=0.03196, over 18411.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2563, pruned_loss=0.03892, over 3593338.09 frames. ], batch size: 52, lr: 5.88e-03, grad_scale: 8.0 2023-03-09 12:17:20,805 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68743.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 12:17:24,265 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68746.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:17:34,144 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68755.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:17:43,483 INFO [train.py:898] (3/4) Epoch 19, batch 3350, loss[loss=0.192, simple_loss=0.2712, pruned_loss=0.05645, over 12781.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2562, pruned_loss=0.03897, over 3588136.26 frames. ], batch size: 131, lr: 5.88e-03, grad_scale: 8.0 2023-03-09 12:17:54,604 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.843e+02 2.741e+02 3.273e+02 4.091e+02 6.319e+02, threshold=6.545e+02, percent-clipped=1.0 2023-03-09 12:18:16,628 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=68791.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 12:18:42,929 INFO [train.py:898] (3/4) Epoch 19, batch 3400, loss[loss=0.1577, simple_loss=0.2441, pruned_loss=0.03561, over 18271.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2566, pruned_loss=0.0392, over 3588856.23 frames. 
], batch size: 49, lr: 5.88e-03, grad_scale: 8.0 2023-03-09 12:19:35,540 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6216, 2.4768, 2.4242, 2.5792, 2.9847, 3.8089, 3.7135, 3.1638], device='cuda:3'), covar=tensor([0.1554, 0.2046, 0.2663, 0.1762, 0.2078, 0.0401, 0.0543, 0.0724], device='cuda:3'), in_proj_covar=tensor([0.0295, 0.0343, 0.0374, 0.0274, 0.0389, 0.0237, 0.0293, 0.0251], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3') 2023-03-09 12:19:41,760 INFO [train.py:898] (3/4) Epoch 19, batch 3450, loss[loss=0.16, simple_loss=0.2645, pruned_loss=0.02773, over 18307.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2557, pruned_loss=0.03864, over 3597024.23 frames. ], batch size: 54, lr: 5.88e-03, grad_scale: 8.0 2023-03-09 12:19:52,956 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1630, 3.4211, 3.3535, 2.9357, 3.0594, 2.9070, 2.4974, 2.2729], device='cuda:3'), covar=tensor([0.0216, 0.0150, 0.0117, 0.0257, 0.0313, 0.0242, 0.0563, 0.0692], device='cuda:3'), in_proj_covar=tensor([0.0068, 0.0057, 0.0060, 0.0067, 0.0086, 0.0065, 0.0075, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 12:19:53,570 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.993e+02 2.660e+02 3.004e+02 3.762e+02 7.210e+02, threshold=6.009e+02, percent-clipped=0.0 2023-03-09 12:20:40,018 INFO [train.py:898] (3/4) Epoch 19, batch 3500, loss[loss=0.1613, simple_loss=0.2511, pruned_loss=0.03571, over 18369.00 frames. ], tot_loss[loss=0.166, simple_loss=0.2552, pruned_loss=0.03846, over 3585674.03 frames. ], batch size: 50, lr: 5.88e-03, grad_scale: 8.0 2023-03-09 12:20:41,718 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9771, 4.1703, 2.3107, 4.1891, 5.2413, 2.6535, 3.8161, 3.8979], device='cuda:3'), covar=tensor([0.0175, 0.1068, 0.1661, 0.0516, 0.0060, 0.1089, 0.0640, 0.0712], device='cuda:3'), in_proj_covar=tensor([0.0159, 0.0264, 0.0201, 0.0193, 0.0120, 0.0179, 0.0214, 0.0223], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:21:04,422 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68934.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:21:35,934 INFO [train.py:898] (3/4) Epoch 19, batch 3550, loss[loss=0.1683, simple_loss=0.2643, pruned_loss=0.03614, over 17954.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2546, pruned_loss=0.03841, over 3581324.57 frames. ], batch size: 65, lr: 5.87e-03, grad_scale: 8.0 2023-03-09 12:21:46,795 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.911e+02 2.828e+02 3.238e+02 3.903e+02 1.381e+03, threshold=6.476e+02, percent-clipped=5.0 2023-03-09 12:21:47,510 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.17 vs. limit=5.0 2023-03-09 12:22:30,469 INFO [train.py:898] (3/4) Epoch 19, batch 3600, loss[loss=0.188, simple_loss=0.2773, pruned_loss=0.04931, over 18559.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2543, pruned_loss=0.03842, over 3590815.58 frames. 
], batch size: 54, lr: 5.87e-03, grad_scale: 8.0 2023-03-09 12:22:59,608 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69041.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:23:01,631 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69043.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:23:34,702 INFO [train.py:898] (3/4) Epoch 20, batch 0, loss[loss=0.1651, simple_loss=0.2403, pruned_loss=0.04489, over 18383.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2403, pruned_loss=0.04489, over 18383.00 frames. ], batch size: 42, lr: 5.72e-03, grad_scale: 8.0 2023-03-09 12:23:34,703 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 12:23:39,729 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5730, 3.2069, 4.4750, 4.0390, 3.2263, 2.8534, 4.0217, 4.6633], device='cuda:3'), covar=tensor([0.0832, 0.1503, 0.0188, 0.0406, 0.0907, 0.1192, 0.0413, 0.0188], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0268, 0.0145, 0.0178, 0.0189, 0.0185, 0.0189, 0.0189], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:23:46,324 INFO [train.py:932] (3/4) Epoch 20, validation: loss=0.1509, simple_loss=0.2512, pruned_loss=0.02534, over 944034.00 frames. 2023-03-09 12:23:46,324 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 12:23:56,118 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69055.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:24:17,100 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.770e+02 2.674e+02 3.193e+02 4.211e+02 7.931e+02, threshold=6.386e+02, percent-clipped=3.0 2023-03-09 12:24:18,837 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-09 12:24:45,724 INFO [train.py:898] (3/4) Epoch 20, batch 50, loss[loss=0.15, simple_loss=0.243, pruned_loss=0.02856, over 18283.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2554, pruned_loss=0.03883, over 810851.36 frames. ], batch size: 49, lr: 5.72e-03, grad_scale: 4.0 2023-03-09 12:24:52,745 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=69103.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:24:54,028 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69104.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:25:44,561 INFO [train.py:898] (3/4) Epoch 20, batch 100, loss[loss=0.197, simple_loss=0.2746, pruned_loss=0.05967, over 12285.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2546, pruned_loss=0.03889, over 1428776.81 frames. ], batch size: 129, lr: 5.72e-03, grad_scale: 4.0 2023-03-09 12:26:02,427 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69162.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:26:09,334 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69168.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:26:15,828 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.993e+02 2.622e+02 3.145e+02 3.671e+02 8.108e+02, threshold=6.290e+02, percent-clipped=1.0 2023-03-09 12:26:42,965 INFO [train.py:898] (3/4) Epoch 20, batch 150, loss[loss=0.166, simple_loss=0.2622, pruned_loss=0.03487, over 18640.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2561, pruned_loss=0.03869, over 1912876.81 frames. 
], batch size: 52, lr: 5.71e-03, grad_scale: 4.0 2023-03-09 12:27:11,778 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7166, 2.9879, 4.2976, 3.8499, 2.6791, 4.6393, 3.9683, 2.9111], device='cuda:3'), covar=tensor([0.0447, 0.1393, 0.0288, 0.0342, 0.1548, 0.0170, 0.0503, 0.1003], device='cuda:3'), in_proj_covar=tensor([0.0209, 0.0237, 0.0201, 0.0157, 0.0221, 0.0207, 0.0240, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 12:27:14,130 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69223.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:27:16,568 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69225.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:27:21,171 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69229.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:27:27,328 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69234.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:27:40,483 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69245.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 12:27:42,423 INFO [train.py:898] (3/4) Epoch 20, batch 200, loss[loss=0.1517, simple_loss=0.2335, pruned_loss=0.03499, over 18490.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2552, pruned_loss=0.03811, over 2292378.09 frames. ], batch size: 44, lr: 5.71e-03, grad_scale: 4.0 2023-03-09 12:28:13,705 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69273.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:28:14,468 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.067e+02 2.703e+02 3.198e+02 3.752e+02 7.537e+02, threshold=6.397e+02, percent-clipped=4.0 2023-03-09 12:28:23,476 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=69282.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:28:28,217 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69286.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:28:41,480 INFO [train.py:898] (3/4) Epoch 20, batch 250, loss[loss=0.171, simple_loss=0.2629, pruned_loss=0.03948, over 18038.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2548, pruned_loss=0.03845, over 2565385.13 frames. 
], batch size: 65, lr: 5.71e-03, grad_scale: 4.0 2023-03-09 12:28:41,778 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6037, 5.5578, 5.2316, 5.4738, 5.5047, 4.8906, 5.4271, 5.1450], device='cuda:3'), covar=tensor([0.0356, 0.0350, 0.1069, 0.0780, 0.0494, 0.0373, 0.0342, 0.0952], device='cuda:3'), in_proj_covar=tensor([0.0465, 0.0532, 0.0675, 0.0419, 0.0434, 0.0484, 0.0526, 0.0652], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 12:28:52,733 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69306.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 12:29:08,274 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69320.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:29:24,519 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69334.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 12:29:32,198 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69341.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:29:39,806 INFO [train.py:898] (3/4) Epoch 20, batch 300, loss[loss=0.1811, simple_loss=0.2668, pruned_loss=0.04775, over 17926.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2547, pruned_loss=0.03835, over 2791776.29 frames. ], batch size: 65, lr: 5.71e-03, grad_scale: 4.0 2023-03-09 12:29:58,528 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1400, 5.1246, 5.3708, 5.4370, 5.1277, 5.9337, 5.6315, 5.1888], device='cuda:3'), covar=tensor([0.1141, 0.0725, 0.0762, 0.0731, 0.1564, 0.0776, 0.0605, 0.1954], device='cuda:3'), in_proj_covar=tensor([0.0352, 0.0280, 0.0306, 0.0303, 0.0330, 0.0417, 0.0276, 0.0409], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3') 2023-03-09 12:30:04,228 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69368.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:30:10,767 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.810e+02 2.793e+02 3.155e+02 4.197e+02 1.182e+03, threshold=6.310e+02, percent-clipped=4.0 2023-03-09 12:30:19,661 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69381.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:30:28,847 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=69389.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:30:34,517 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69394.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:30:37,562 INFO [train.py:898] (3/4) Epoch 20, batch 350, loss[loss=0.1478, simple_loss=0.2301, pruned_loss=0.03272, over 18490.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2546, pruned_loss=0.03825, over 2985692.25 frames. ], batch size: 44, lr: 5.71e-03, grad_scale: 4.0 2023-03-09 12:30:40,736 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69399.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:31:16,155 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69429.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:31:36,510 INFO [train.py:898] (3/4) Epoch 20, batch 400, loss[loss=0.1773, simple_loss=0.2705, pruned_loss=0.04202, over 18473.00 frames. ], tot_loss[loss=0.165, simple_loss=0.2544, pruned_loss=0.03785, over 3129666.99 frames. 
], batch size: 59, lr: 5.70e-03, grad_scale: 8.0 2023-03-09 12:31:47,145 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69455.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:32:00,602 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4471, 3.3415, 2.1397, 4.2044, 2.9280, 4.0520, 2.1406, 3.7997], device='cuda:3'), covar=tensor([0.0653, 0.0862, 0.1466, 0.0562, 0.0922, 0.0338, 0.1439, 0.0411], device='cuda:3'), in_proj_covar=tensor([0.0212, 0.0223, 0.0188, 0.0281, 0.0193, 0.0262, 0.0203, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:32:08,922 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.671e+02 2.636e+02 3.143e+02 3.755e+02 6.655e+02, threshold=6.287e+02, percent-clipped=1.0 2023-03-09 12:32:35,609 INFO [train.py:898] (3/4) Epoch 20, batch 450, loss[loss=0.1669, simple_loss=0.259, pruned_loss=0.03738, over 18267.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2539, pruned_loss=0.03789, over 3245419.65 frames. ], batch size: 57, lr: 5.70e-03, grad_scale: 8.0 2023-03-09 12:33:00,446 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69518.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:33:07,376 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69524.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:33:17,756 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3150, 5.8140, 5.4008, 5.6236, 5.3832, 5.2443, 5.8725, 5.8028], device='cuda:3'), covar=tensor([0.1156, 0.0710, 0.0551, 0.0682, 0.1427, 0.0693, 0.0571, 0.0675], device='cuda:3'), in_proj_covar=tensor([0.0607, 0.0518, 0.0382, 0.0546, 0.0739, 0.0542, 0.0737, 0.0559], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 12:33:26,333 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7266, 3.6049, 4.9524, 4.2306, 3.3516, 2.8649, 4.4466, 5.1508], device='cuda:3'), covar=tensor([0.0772, 0.1475, 0.0151, 0.0426, 0.0908, 0.1172, 0.0344, 0.0172], device='cuda:3'), in_proj_covar=tensor([0.0147, 0.0273, 0.0146, 0.0180, 0.0191, 0.0188, 0.0192, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:33:33,752 INFO [train.py:898] (3/4) Epoch 20, batch 500, loss[loss=0.1782, simple_loss=0.2731, pruned_loss=0.04164, over 18390.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2541, pruned_loss=0.03825, over 3315999.91 frames. 
], batch size: 52, lr: 5.70e-03, grad_scale: 8.0 2023-03-09 12:33:47,187 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6224, 2.9385, 4.2065, 3.7245, 2.6139, 4.5074, 3.8554, 2.9465], device='cuda:3'), covar=tensor([0.0500, 0.1426, 0.0271, 0.0380, 0.1535, 0.0217, 0.0556, 0.0970], device='cuda:3'), in_proj_covar=tensor([0.0210, 0.0238, 0.0201, 0.0159, 0.0223, 0.0208, 0.0242, 0.0198], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 12:34:05,764 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.904e+02 2.776e+02 3.162e+02 3.774e+02 8.469e+02, threshold=6.325e+02, percent-clipped=1.0 2023-03-09 12:34:14,479 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69581.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:34:32,488 INFO [train.py:898] (3/4) Epoch 20, batch 550, loss[loss=0.1526, simple_loss=0.2442, pruned_loss=0.03054, over 18388.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2544, pruned_loss=0.03813, over 3379886.83 frames. ], batch size: 46, lr: 5.70e-03, grad_scale: 4.0 2023-03-09 12:34:37,597 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69601.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 12:34:40,042 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6791, 3.6198, 4.9556, 4.2838, 3.1985, 2.8448, 4.3751, 5.1393], device='cuda:3'), covar=tensor([0.0793, 0.1343, 0.0177, 0.0367, 0.0946, 0.1156, 0.0369, 0.0196], device='cuda:3'), in_proj_covar=tensor([0.0147, 0.0272, 0.0146, 0.0180, 0.0191, 0.0188, 0.0192, 0.0191], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:34:48,744 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-09 12:35:10,081 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69629.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 12:35:15,445 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9386, 3.6606, 5.0815, 2.8182, 4.3651, 2.5906, 3.1964, 1.7586], device='cuda:3'), covar=tensor([0.1147, 0.0942, 0.0175, 0.0982, 0.0588, 0.2545, 0.2775, 0.2227], device='cuda:3'), in_proj_covar=tensor([0.0216, 0.0241, 0.0179, 0.0192, 0.0252, 0.0267, 0.0318, 0.0231], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 12:35:31,660 INFO [train.py:898] (3/4) Epoch 20, batch 600, loss[loss=0.1752, simple_loss=0.2711, pruned_loss=0.03967, over 18327.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2552, pruned_loss=0.03867, over 3422204.21 frames. ], batch size: 56, lr: 5.69e-03, grad_scale: 4.0 2023-03-09 12:36:04,153 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69674.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 12:36:04,883 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.123e+02 2.772e+02 3.241e+02 3.888e+02 6.801e+02, threshold=6.482e+02, percent-clipped=2.0 2023-03-09 12:36:06,289 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69676.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:36:14,578 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.43 vs. 
limit=5.0 2023-03-09 12:36:29,581 INFO [train.py:898] (3/4) Epoch 20, batch 650, loss[loss=0.1395, simple_loss=0.223, pruned_loss=0.02801, over 18439.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2558, pruned_loss=0.03879, over 3471518.52 frames. ], batch size: 42, lr: 5.69e-03, grad_scale: 4.0 2023-03-09 12:36:32,777 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69699.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:36:35,330 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 12:36:42,009 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8580, 3.0868, 4.4576, 3.9074, 2.8539, 4.8170, 3.9650, 3.1569], device='cuda:3'), covar=tensor([0.0413, 0.1280, 0.0294, 0.0369, 0.1382, 0.0180, 0.0602, 0.0848], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0237, 0.0202, 0.0159, 0.0221, 0.0207, 0.0243, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 12:37:01,626 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69724.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:37:14,033 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69735.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 12:37:28,028 INFO [train.py:898] (3/4) Epoch 20, batch 700, loss[loss=0.166, simple_loss=0.26, pruned_loss=0.03604, over 18477.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2556, pruned_loss=0.03843, over 3510406.58 frames. ], batch size: 53, lr: 5.69e-03, grad_scale: 4.0 2023-03-09 12:37:28,239 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=69747.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:37:28,485 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6261, 2.8261, 2.5652, 2.9010, 3.6338, 3.5377, 3.1150, 3.0168], device='cuda:3'), covar=tensor([0.0165, 0.0252, 0.0591, 0.0380, 0.0167, 0.0162, 0.0378, 0.0295], device='cuda:3'), in_proj_covar=tensor([0.0138, 0.0131, 0.0161, 0.0155, 0.0128, 0.0114, 0.0151, 0.0153], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:37:31,613 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69750.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:37:41,104 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4177, 5.3971, 5.0067, 5.3937, 5.3586, 4.7430, 5.2343, 5.0100], device='cuda:3'), covar=tensor([0.0424, 0.0460, 0.1301, 0.0735, 0.0588, 0.0382, 0.0446, 0.1083], device='cuda:3'), in_proj_covar=tensor([0.0478, 0.0544, 0.0689, 0.0425, 0.0439, 0.0491, 0.0539, 0.0667], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 12:37:53,336 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.73 vs. limit=2.0 2023-03-09 12:38:00,236 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.022e+02 2.614e+02 3.096e+02 3.753e+02 7.667e+02, threshold=6.192e+02, percent-clipped=2.0 2023-03-09 12:38:26,160 INFO [train.py:898] (3/4) Epoch 20, batch 750, loss[loss=0.1678, simple_loss=0.2611, pruned_loss=0.03722, over 15787.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2559, pruned_loss=0.03842, over 3527731.11 frames. 
], batch size: 94, lr: 5.69e-03, grad_scale: 4.0 2023-03-09 12:38:50,825 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69818.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:38:51,085 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5571, 2.2178, 2.5434, 2.6012, 3.1141, 4.6573, 4.5219, 3.3015], device='cuda:3'), covar=tensor([0.1857, 0.2558, 0.3024, 0.1929, 0.2467, 0.0249, 0.0421, 0.0975], device='cuda:3'), in_proj_covar=tensor([0.0298, 0.0344, 0.0378, 0.0276, 0.0390, 0.0237, 0.0293, 0.0251], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 12:38:58,121 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69824.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:39:25,614 INFO [train.py:898] (3/4) Epoch 20, batch 800, loss[loss=0.1588, simple_loss=0.2477, pruned_loss=0.03492, over 18387.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2559, pruned_loss=0.03864, over 3528228.85 frames. ], batch size: 42, lr: 5.69e-03, grad_scale: 8.0 2023-03-09 12:39:48,075 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=69866.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:39:55,176 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=69872.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:39:58,468 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.014e+02 2.750e+02 3.144e+02 3.939e+02 8.664e+02, threshold=6.288e+02, percent-clipped=4.0 2023-03-09 12:40:05,931 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69881.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:40:23,199 INFO [train.py:898] (3/4) Epoch 20, batch 850, loss[loss=0.1451, simple_loss=0.2359, pruned_loss=0.02712, over 18505.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2559, pruned_loss=0.03877, over 3529438.54 frames. ], batch size: 47, lr: 5.68e-03, grad_scale: 8.0 2023-03-09 12:40:28,229 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69901.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 12:41:01,253 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=69929.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:41:01,327 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69929.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 12:41:21,122 INFO [train.py:898] (3/4) Epoch 20, batch 900, loss[loss=0.1525, simple_loss=0.2338, pruned_loss=0.03557, over 18400.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2553, pruned_loss=0.03832, over 3555192.86 frames. 
], batch size: 42, lr: 5.68e-03, grad_scale: 8.0 2023-03-09 12:41:23,622 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=69949.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 12:41:54,382 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.882e+02 2.754e+02 3.235e+02 3.952e+02 9.067e+02, threshold=6.470e+02, percent-clipped=4.0 2023-03-09 12:41:55,874 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69976.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:41:56,853 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=69977.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:42:19,880 INFO [train.py:898] (3/4) Epoch 20, batch 950, loss[loss=0.1719, simple_loss=0.2611, pruned_loss=0.04136, over 18586.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.255, pruned_loss=0.03796, over 3568211.39 frames. ], batch size: 54, lr: 5.68e-03, grad_scale: 8.0 2023-03-09 12:42:56,936 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=70024.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:42:56,992 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70024.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:43:04,212 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=70030.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 12:43:23,772 INFO [train.py:898] (3/4) Epoch 20, batch 1000, loss[loss=0.1469, simple_loss=0.2317, pruned_loss=0.0311, over 18350.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2544, pruned_loss=0.03798, over 3574757.94 frames. ], batch size: 46, lr: 5.68e-03, grad_scale: 8.0 2023-03-09 12:43:27,314 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70050.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:43:28,471 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4901, 6.0392, 5.5223, 5.8265, 5.6290, 5.4640, 6.1120, 6.0103], device='cuda:3'), covar=tensor([0.1188, 0.0731, 0.0423, 0.0677, 0.1413, 0.0685, 0.0535, 0.0696], device='cuda:3'), in_proj_covar=tensor([0.0616, 0.0522, 0.0387, 0.0553, 0.0745, 0.0543, 0.0746, 0.0569], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 12:43:40,429 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7021, 3.6500, 5.3497, 3.2159, 4.7278, 2.7661, 3.0923, 2.0345], device='cuda:3'), covar=tensor([0.1180, 0.0912, 0.0106, 0.0670, 0.0403, 0.2346, 0.2574, 0.1956], device='cuda:3'), in_proj_covar=tensor([0.0219, 0.0242, 0.0181, 0.0194, 0.0256, 0.0269, 0.0321, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 12:43:43,704 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-09 12:43:52,920 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=70072.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:43:56,170 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.914e+02 2.640e+02 3.091e+02 3.737e+02 6.684e+02, threshold=6.182e+02, percent-clipped=1.0 2023-03-09 12:44:22,014 INFO [train.py:898] (3/4) Epoch 20, batch 1050, loss[loss=0.1824, simple_loss=0.2801, pruned_loss=0.04228, over 18328.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2549, pruned_loss=0.03806, over 3573799.37 frames. 
], batch size: 56, lr: 5.68e-03, grad_scale: 8.0 2023-03-09 12:44:23,293 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=70098.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:44:37,181 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=70110.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:44:38,355 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8532, 4.4713, 4.6025, 3.2712, 3.6187, 3.4214, 2.6318, 2.5109], device='cuda:3'), covar=tensor([0.0226, 0.0142, 0.0071, 0.0323, 0.0381, 0.0241, 0.0772, 0.0886], device='cuda:3'), in_proj_covar=tensor([0.0069, 0.0058, 0.0061, 0.0067, 0.0088, 0.0066, 0.0076, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 12:45:20,004 INFO [train.py:898] (3/4) Epoch 20, batch 1100, loss[loss=0.1707, simple_loss=0.2591, pruned_loss=0.04115, over 16985.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2554, pruned_loss=0.03811, over 3570733.87 frames. ], batch size: 78, lr: 5.67e-03, grad_scale: 8.0 2023-03-09 12:45:48,216 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=70171.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:45:52,310 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.767e+02 2.847e+02 3.446e+02 4.067e+02 6.954e+02, threshold=6.891e+02, percent-clipped=3.0 2023-03-09 12:46:17,939 INFO [train.py:898] (3/4) Epoch 20, batch 1150, loss[loss=0.1642, simple_loss=0.2516, pruned_loss=0.03837, over 18305.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2551, pruned_loss=0.03826, over 3566139.98 frames. ], batch size: 49, lr: 5.67e-03, grad_scale: 8.0 2023-03-09 12:46:24,248 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0897, 4.2606, 2.6455, 4.1445, 5.2970, 2.6584, 3.7647, 4.1048], device='cuda:3'), covar=tensor([0.0145, 0.1035, 0.1415, 0.0538, 0.0068, 0.1086, 0.0657, 0.0604], device='cuda:3'), in_proj_covar=tensor([0.0161, 0.0266, 0.0203, 0.0195, 0.0123, 0.0181, 0.0215, 0.0225], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 12:47:07,973 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-09 12:47:16,743 INFO [train.py:898] (3/4) Epoch 20, batch 1200, loss[loss=0.1605, simple_loss=0.2493, pruned_loss=0.0358, over 18263.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2551, pruned_loss=0.03839, over 3566386.63 frames. ], batch size: 47, lr: 5.67e-03, grad_scale: 8.0 2023-03-09 12:47:31,538 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=70260.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:47:49,159 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.495e+02 2.757e+02 3.193e+02 3.861e+02 7.458e+02, threshold=6.386e+02, percent-clipped=1.0 2023-03-09 12:48:15,299 INFO [train.py:898] (3/4) Epoch 20, batch 1250, loss[loss=0.1686, simple_loss=0.2566, pruned_loss=0.04036, over 17821.00 frames. ], tot_loss[loss=0.166, simple_loss=0.2555, pruned_loss=0.03826, over 3571233.71 frames. ], batch size: 70, lr: 5.67e-03, grad_scale: 8.0 2023-03-09 12:48:31,552 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.92 vs. 
limit=5.0 2023-03-09 12:48:42,624 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=70321.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:48:53,696 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70330.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 12:48:59,579 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7765, 2.5146, 2.7173, 2.7978, 3.4617, 5.1027, 4.9162, 3.5914], device='cuda:3'), covar=tensor([0.1699, 0.2299, 0.2916, 0.1796, 0.2051, 0.0184, 0.0337, 0.0869], device='cuda:3'), in_proj_covar=tensor([0.0295, 0.0343, 0.0375, 0.0275, 0.0386, 0.0238, 0.0293, 0.0250], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 12:49:13,049 INFO [train.py:898] (3/4) Epoch 20, batch 1300, loss[loss=0.1794, simple_loss=0.2767, pruned_loss=0.04108, over 18225.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2558, pruned_loss=0.03855, over 3583101.04 frames. ], batch size: 60, lr: 5.67e-03, grad_scale: 8.0 2023-03-09 12:49:44,657 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.016e+02 2.769e+02 3.249e+02 3.833e+02 9.851e+02, threshold=6.498e+02, percent-clipped=4.0 2023-03-09 12:49:48,780 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=70378.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 12:50:05,194 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0 2023-03-09 12:50:10,106 INFO [train.py:898] (3/4) Epoch 20, batch 1350, loss[loss=0.194, simple_loss=0.2769, pruned_loss=0.05556, over 18308.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.2547, pruned_loss=0.03807, over 3595577.86 frames. ], batch size: 54, lr: 5.66e-03, grad_scale: 8.0 2023-03-09 12:51:08,591 INFO [train.py:898] (3/4) Epoch 20, batch 1400, loss[loss=0.1794, simple_loss=0.2709, pruned_loss=0.04394, over 18272.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2541, pruned_loss=0.03802, over 3599513.40 frames. ], batch size: 57, lr: 5.66e-03, grad_scale: 8.0 2023-03-09 12:51:31,266 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=70466.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 12:51:41,138 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.979e+02 2.850e+02 3.137e+02 3.892e+02 8.751e+02, threshold=6.275e+02, percent-clipped=3.0 2023-03-09 12:52:06,377 INFO [train.py:898] (3/4) Epoch 20, batch 1450, loss[loss=0.1695, simple_loss=0.2585, pruned_loss=0.04026, over 17949.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.254, pruned_loss=0.03813, over 3581184.08 frames. ], batch size: 65, lr: 5.66e-03, grad_scale: 8.0 2023-03-09 12:52:18,779 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7412, 2.4857, 4.4702, 4.0986, 2.5448, 4.7097, 3.9357, 3.0102], device='cuda:3'), covar=tensor([0.0456, 0.1981, 0.0260, 0.0279, 0.1917, 0.0273, 0.0544, 0.1136], device='cuda:3'), in_proj_covar=tensor([0.0207, 0.0236, 0.0202, 0.0159, 0.0221, 0.0208, 0.0241, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 12:53:04,458 INFO [train.py:898] (3/4) Epoch 20, batch 1500, loss[loss=0.1726, simple_loss=0.2624, pruned_loss=0.04135, over 17970.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.2545, pruned_loss=0.0381, over 3593262.06 frames. 
2023-03-09 12:53:38,228 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.836e+02 2.609e+02 3.412e+02 4.380e+02 8.286e+02, threshold=6.824e+02, percent-clipped=3.0
2023-03-09 12:54:03,315 INFO [train.py:898] (3/4) Epoch 20, batch 1550, loss[loss=0.1793, simple_loss=0.2669, pruned_loss=0.04581, over 18373.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2538, pruned_loss=0.03791, over 3581103.51 frames. ], batch size: 56, lr: 5.66e-03, grad_scale: 4.0
2023-03-09 12:54:25,723 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=70616.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 12:54:28,154 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6270, 3.6414, 3.4525, 3.0704, 3.4375, 2.7955, 2.6662, 3.7130], device='cuda:3'), covar=tensor([0.0058, 0.0090, 0.0078, 0.0139, 0.0090, 0.0173, 0.0214, 0.0058], device='cuda:3'), in_proj_covar=tensor([0.0134, 0.0154, 0.0129, 0.0183, 0.0139, 0.0176, 0.0179, 0.0115], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 12:55:02,195 INFO [train.py:898] (3/4) Epoch 20, batch 1600, loss[loss=0.1982, simple_loss=0.2779, pruned_loss=0.05921, over 12544.00 frames. ], tot_loss[loss=0.1642, simple_loss=0.2533, pruned_loss=0.03756, over 3591042.62 frames. ], batch size: 129, lr: 5.65e-03, grad_scale: 8.0
2023-03-09 12:55:35,595 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.948e+02 2.737e+02 3.118e+02 3.818e+02 6.822e+02, threshold=6.236e+02, percent-clipped=0.0
2023-03-09 12:55:59,467 INFO [train.py:898] (3/4) Epoch 20, batch 1650, loss[loss=0.155, simple_loss=0.2412, pruned_loss=0.03439, over 18254.00 frames. ], tot_loss[loss=0.165, simple_loss=0.2542, pruned_loss=0.03788, over 3585749.26 frames. ], batch size: 45, lr: 5.65e-03, grad_scale: 8.0
2023-03-09 12:56:08,177 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0218, 3.5080, 4.7310, 4.1769, 3.1489, 4.9920, 4.2019, 3.2760], device='cuda:3'), covar=tensor([0.0407, 0.1060, 0.0231, 0.0307, 0.1215, 0.0139, 0.0442, 0.0831], device='cuda:3'), in_proj_covar=tensor([0.0206, 0.0236, 0.0201, 0.0159, 0.0221, 0.0207, 0.0240, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 12:56:49,954 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7499, 3.4165, 4.8122, 4.2755, 3.1736, 2.8057, 4.0653, 4.9909], device='cuda:3'), covar=tensor([0.0875, 0.1597, 0.0183, 0.0382, 0.0963, 0.1236, 0.0449, 0.0228], device='cuda:3'), in_proj_covar=tensor([0.0147, 0.0271, 0.0148, 0.0179, 0.0190, 0.0188, 0.0193, 0.0192], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 12:56:57,278 INFO [train.py:898] (3/4) Epoch 20, batch 1700, loss[loss=0.1897, simple_loss=0.2861, pruned_loss=0.04659, over 18476.00 frames. ], tot_loss[loss=0.1646, simple_loss=0.2539, pruned_loss=0.03759, over 3586323.85 frames. ], batch size: 59, lr: 5.65e-03, grad_scale: 8.0
2023-03-09 12:57:00,596 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.59 vs. limit=5.0
2023-03-09 12:57:20,604 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70766.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 12:57:31,212 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.987e+02 2.775e+02 3.401e+02 4.011e+02 8.447e+02, threshold=6.802e+02, percent-clipped=3.0
2023-03-09 12:57:38,107 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
2023-03-09 12:57:41,386 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6685, 2.3278, 2.6153, 2.5233, 3.1585, 4.6643, 4.4662, 3.3918], device='cuda:3'), covar=tensor([0.1739, 0.2415, 0.2786, 0.1954, 0.2254, 0.0231, 0.0404, 0.0866], device='cuda:3'), in_proj_covar=tensor([0.0297, 0.0346, 0.0379, 0.0277, 0.0387, 0.0239, 0.0295, 0.0251], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3')
2023-03-09 12:57:55,559 INFO [train.py:898] (3/4) Epoch 20, batch 1750, loss[loss=0.1758, simple_loss=0.2646, pruned_loss=0.04351, over 18208.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2545, pruned_loss=0.0379, over 3584051.92 frames. ], batch size: 60, lr: 5.65e-03, grad_scale: 8.0
2023-03-09 12:58:16,251 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=70814.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 12:58:31,716 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1945, 5.5837, 2.8560, 5.3800, 5.2659, 5.5794, 5.3698, 2.9829], device='cuda:3'), covar=tensor([0.0163, 0.0059, 0.0689, 0.0058, 0.0061, 0.0059, 0.0082, 0.0820], device='cuda:3'), in_proj_covar=tensor([0.0086, 0.0080, 0.0095, 0.0095, 0.0084, 0.0075, 0.0084, 0.0096], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 12:58:46,102 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-09 12:58:54,608 INFO [train.py:898] (3/4) Epoch 20, batch 1800, loss[loss=0.1559, simple_loss=0.2489, pruned_loss=0.03141, over 18488.00 frames. ], tot_loss[loss=0.1647, simple_loss=0.2539, pruned_loss=0.03777, over 3587637.99 frames. ], batch size: 53, lr: 5.65e-03, grad_scale: 8.0
2023-03-09 12:59:28,905 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.702e+02 2.663e+02 3.014e+02 3.631e+02 8.036e+02, threshold=6.028e+02, percent-clipped=1.0
2023-03-09 12:59:36,831 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6074, 2.3908, 2.5187, 2.5304, 3.2647, 4.9249, 4.8249, 3.7053], device='cuda:3'), covar=tensor([0.1913, 0.2575, 0.3133, 0.2082, 0.2408, 0.0259, 0.0341, 0.0812], device='cuda:3'), in_proj_covar=tensor([0.0295, 0.0344, 0.0377, 0.0276, 0.0385, 0.0239, 0.0293, 0.0250], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 12:59:41,822 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-09 12:59:53,193 INFO [train.py:898] (3/4) Epoch 20, batch 1850, loss[loss=0.1596, simple_loss=0.2466, pruned_loss=0.0363, over 18556.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.2537, pruned_loss=0.03801, over 3583621.75 frames. ], batch size: 49, lr: 5.64e-03, grad_scale: 8.0
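The [scaling.py:679] "Whitening" records compare a per-module statistic against a limit. One plausible reading (an assumption, not stated in the log) is a measure of how far the per-group channel covariance is from a multiple of the identity, i.e. mean(eig^2) / mean(eig)^2, which is 1.0 for perfectly "white" activations and grows as the spectrum becomes uneven. A self-contained sketch under that assumption:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # x: (num_frames, num_channels); channels are split into equal groups.
    num_frames, num_channels = x.shape
    c = num_channels // num_groups
    x = x.reshape(num_frames, num_groups, c).transpose(0, 1)  # (groups, frames, c)
    cov = torch.matmul(x.transpose(1, 2), x) / num_frames     # per-group covariance
    # mean(eig^2) / mean(eig)^2 via traces, avoiding an eigendecomposition:
    tr_cov = cov.diagonal(dim1=1, dim2=2).sum(-1)   # sum of eigenvalues
    tr_cov2 = (cov * cov).sum(dim=(1, 2))           # sum of squared eigenvalues
    metric = (tr_cov2 / c) / (tr_cov / c) ** 2
    return metric.mean()

x = torch.randn(200, 96)
print(whitening_metric(x, num_groups=8))  # near 1.0 for white noise; the log
# reports this kind of metric against a limit (e.g. "metric=1.15 vs. limit=2.0")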
2023-03-09 13:00:16,576 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9057, 5.0217, 5.0816, 4.7761, 4.7946, 4.7759, 5.0978, 5.1451], device='cuda:3'), covar=tensor([0.0074, 0.0061, 0.0052, 0.0100, 0.0055, 0.0149, 0.0069, 0.0085], device='cuda:3'), in_proj_covar=tensor([0.0094, 0.0070, 0.0073, 0.0092, 0.0075, 0.0104, 0.0086, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 13:00:16,592 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70916.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:00:51,682 INFO [train.py:898] (3/4) Epoch 20, batch 1900, loss[loss=0.1605, simple_loss=0.2476, pruned_loss=0.03669, over 18480.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2543, pruned_loss=0.038, over 3582211.66 frames. ], batch size: 51, lr: 5.64e-03, grad_scale: 8.0
2023-03-09 13:00:59,109 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5322, 2.7100, 2.5297, 2.8514, 3.5562, 3.4500, 3.0584, 2.8525], device='cuda:3'), covar=tensor([0.0189, 0.0322, 0.0578, 0.0436, 0.0196, 0.0185, 0.0435, 0.0429], device='cuda:3'), in_proj_covar=tensor([0.0139, 0.0130, 0.0159, 0.0154, 0.0127, 0.0113, 0.0149, 0.0153], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:01:08,986 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.79 vs. limit=5.0
2023-03-09 13:01:11,884 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=70964.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:01:26,017 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.590e+02 2.797e+02 3.325e+02 3.887e+02 8.370e+02, threshold=6.650e+02, percent-clipped=5.0
2023-03-09 13:01:44,864 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=70992.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:01:50,098 INFO [train.py:898] (3/4) Epoch 20, batch 1950, loss[loss=0.1814, simple_loss=0.2764, pruned_loss=0.04317, over 18350.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2549, pruned_loss=0.03814, over 3582083.07 frames. ], batch size: 55, lr: 5.64e-03, grad_scale: 8.0
2023-03-09 13:02:21,302 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8942, 3.7901, 3.9482, 3.7869, 3.7488, 3.8682, 3.8893, 3.9740], device='cuda:3'), covar=tensor([0.0117, 0.0124, 0.0122, 0.0142, 0.0121, 0.0164, 0.0111, 0.0129], device='cuda:3'), in_proj_covar=tensor([0.0094, 0.0070, 0.0073, 0.0092, 0.0075, 0.0104, 0.0087, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 13:02:28,762 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=71030.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:02:47,804 INFO [train.py:898] (3/4) Epoch 20, batch 2000, loss[loss=0.1999, simple_loss=0.2779, pruned_loss=0.06089, over 12875.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2545, pruned_loss=0.03825, over 3570897.68 frames. ], batch size: 130, lr: 5.64e-03, grad_scale: 8.0
2023-03-09 13:02:54,854 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71053.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:03:14,028 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-09 13:03:21,387 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.116e+02 2.863e+02 3.373e+02 3.932e+02 6.166e+02, threshold=6.746e+02, percent-clipped=0.0
2023-03-09 13:03:40,000 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71091.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:03:42,765 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.73 vs. limit=2.0
2023-03-09 13:03:46,422 INFO [train.py:898] (3/4) Epoch 20, batch 2050, loss[loss=0.1676, simple_loss=0.2591, pruned_loss=0.03808, over 18235.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2553, pruned_loss=0.03856, over 3561092.89 frames. ], batch size: 47, lr: 5.64e-03, grad_scale: 8.0
2023-03-09 13:04:45,491 INFO [train.py:898] (3/4) Epoch 20, batch 2100, loss[loss=0.2, simple_loss=0.2851, pruned_loss=0.05739, over 18208.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2551, pruned_loss=0.03853, over 3565009.39 frames. ], batch size: 60, lr: 5.63e-03, grad_scale: 8.0
2023-03-09 13:05:19,432 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.825e+02 2.628e+02 3.039e+02 3.790e+02 1.100e+03, threshold=6.078e+02, percent-clipped=2.0
2023-03-09 13:05:44,230 INFO [train.py:898] (3/4) Epoch 20, batch 2150, loss[loss=0.156, simple_loss=0.2455, pruned_loss=0.03321, over 18500.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2559, pruned_loss=0.03839, over 3564953.38 frames. ], batch size: 47, lr: 5.63e-03, grad_scale: 8.0
2023-03-09 13:05:49,493 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0760, 4.2308, 2.6904, 4.2092, 5.3156, 2.6382, 3.8047, 4.0265], device='cuda:3'), covar=tensor([0.0168, 0.1183, 0.1492, 0.0602, 0.0075, 0.1263, 0.0718, 0.0715], device='cuda:3'), in_proj_covar=tensor([0.0162, 0.0268, 0.0204, 0.0195, 0.0124, 0.0182, 0.0215, 0.0224], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:05:57,470 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6646, 3.6816, 3.5453, 3.1657, 3.3856, 2.8274, 2.8388, 3.7895], device='cuda:3'), covar=tensor([0.0064, 0.0081, 0.0069, 0.0134, 0.0097, 0.0188, 0.0187, 0.0044], device='cuda:3'), in_proj_covar=tensor([0.0135, 0.0155, 0.0129, 0.0183, 0.0140, 0.0177, 0.0179, 0.0116], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 13:06:43,300 INFO [train.py:898] (3/4) Epoch 20, batch 2200, loss[loss=0.1693, simple_loss=0.2582, pruned_loss=0.04023, over 18395.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2549, pruned_loss=0.03822, over 3577954.67 frames. ], batch size: 52, lr: 5.63e-03, grad_scale: 8.0
2023-03-09 13:07:09,058 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-09 13:07:16,426 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.980e+02 2.743e+02 3.244e+02 3.938e+02 1.174e+03, threshold=6.489e+02, percent-clipped=4.0
2023-03-09 13:07:41,312 INFO [train.py:898] (3/4) Epoch 20, batch 2250, loss[loss=0.1775, simple_loss=0.2671, pruned_loss=0.04402, over 18558.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.255, pruned_loss=0.03825, over 3588074.20 frames. ], batch size: 54, lr: 5.63e-03, grad_scale: 8.0
2023-03-09 13:08:01,316 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4545, 5.2062, 5.6851, 5.5585, 5.3530, 6.1332, 5.7750, 5.4288], device='cuda:3'), covar=tensor([0.1091, 0.0639, 0.0651, 0.0704, 0.1286, 0.0668, 0.0575, 0.1726], device='cuda:3'), in_proj_covar=tensor([0.0356, 0.0284, 0.0306, 0.0304, 0.0330, 0.0417, 0.0280, 0.0411], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 13:08:18,124 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0
2023-03-09 13:08:40,029 INFO [train.py:898] (3/4) Epoch 20, batch 2300, loss[loss=0.1629, simple_loss=0.2534, pruned_loss=0.03614, over 18303.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2551, pruned_loss=0.03819, over 3598548.71 frames. ], batch size: 54, lr: 5.63e-03, grad_scale: 8.0
2023-03-09 13:08:41,397 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=71348.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:09:13,545 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.760e+02 2.586e+02 3.152e+02 3.675e+02 6.468e+02, threshold=6.303e+02, percent-clipped=0.0
2023-03-09 13:09:25,916 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=71386.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:09:30,562 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7255, 5.2772, 5.2369, 5.2128, 4.7006, 5.0966, 4.5275, 5.0874], device='cuda:3'), covar=tensor([0.0245, 0.0260, 0.0217, 0.0468, 0.0427, 0.0239, 0.1153, 0.0330], device='cuda:3'), in_proj_covar=tensor([0.0212, 0.0260, 0.0250, 0.0325, 0.0265, 0.0269, 0.0308, 0.0257], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 13:09:38,250 INFO [train.py:898] (3/4) Epoch 20, batch 2350, loss[loss=0.1871, simple_loss=0.2798, pruned_loss=0.04719, over 17166.00 frames. ], tot_loss[loss=0.165, simple_loss=0.2545, pruned_loss=0.03774, over 3604486.17 frames. ], batch size: 78, lr: 5.62e-03, grad_scale: 8.0
2023-03-09 13:10:01,509 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8258, 4.0424, 2.5598, 4.0629, 5.0794, 2.4704, 3.6566, 3.8479], device='cuda:3'), covar=tensor([0.0157, 0.1136, 0.1516, 0.0570, 0.0077, 0.1296, 0.0698, 0.0763], device='cuda:3'), in_proj_covar=tensor([0.0161, 0.0264, 0.0202, 0.0193, 0.0123, 0.0180, 0.0212, 0.0221], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:10:37,017 INFO [train.py:898] (3/4) Epoch 20, batch 2400, loss[loss=0.1545, simple_loss=0.2495, pruned_loss=0.02976, over 18624.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2538, pruned_loss=0.03741, over 3585775.58 frames. ], batch size: 52, lr: 5.62e-03, grad_scale: 8.0
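The attn_weights_entropy dumps from [zipformer.py:1455] summarize how concentrated each attention head is: low entropy means sharply focused attention, high entropy means near-uniform attention. A hedged reconstruction of the statistic (the exact reduction axes are an assumption):

import torch

def attention_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    # attn_weights: (num_heads, query_len, key_len); each row sums to 1.
    p = attn_weights.clamp(min=1e-20)
    entropy = -(p * p.log()).sum(dim=-1)   # (num_heads, query_len)
    return entropy.mean(dim=-1)            # one value per head

w = torch.softmax(torch.randn(8, 50, 50), dim=-1)
print(attention_entropy(w))  # 8 per-head entropies, like the dumps above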
2023-03-09 13:11:10,786 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.729e+02 2.680e+02 3.169e+02 3.609e+02 6.256e+02, threshold=6.338e+02, percent-clipped=0.0
2023-03-09 13:11:12,170 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7584, 5.2697, 5.2754, 5.3347, 4.7362, 5.1800, 4.5206, 5.1392], device='cuda:3'), covar=tensor([0.0283, 0.0342, 0.0202, 0.0369, 0.0396, 0.0251, 0.1166, 0.0329], device='cuda:3'), in_proj_covar=tensor([0.0212, 0.0259, 0.0248, 0.0324, 0.0265, 0.0267, 0.0307, 0.0256], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 13:11:35,512 INFO [train.py:898] (3/4) Epoch 20, batch 2450, loss[loss=0.1619, simple_loss=0.2577, pruned_loss=0.03307, over 18341.00 frames. ], tot_loss[loss=0.1647, simple_loss=0.2542, pruned_loss=0.03762, over 3570270.25 frames. ], batch size: 55, lr: 5.62e-03, grad_scale: 8.0
2023-03-09 13:11:58,179 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5973, 5.1383, 5.1219, 5.1467, 4.6082, 5.0205, 4.4817, 4.9884], device='cuda:3'), covar=tensor([0.0265, 0.0278, 0.0193, 0.0440, 0.0389, 0.0229, 0.1094, 0.0317], device='cuda:3'), in_proj_covar=tensor([0.0212, 0.0258, 0.0247, 0.0323, 0.0264, 0.0266, 0.0306, 0.0256], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 13:12:14,585 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4889, 5.9867, 5.5375, 5.8051, 5.5809, 5.4714, 6.0542, 5.9836], device='cuda:3'), covar=tensor([0.1069, 0.0810, 0.0452, 0.0684, 0.1342, 0.0620, 0.0565, 0.0732], device='cuda:3'), in_proj_covar=tensor([0.0600, 0.0523, 0.0377, 0.0539, 0.0724, 0.0532, 0.0735, 0.0554], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 13:12:33,802 INFO [train.py:898] (3/4) Epoch 20, batch 2500, loss[loss=0.1628, simple_loss=0.2507, pruned_loss=0.03742, over 18364.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.2543, pruned_loss=0.03772, over 3574121.39 frames. ], batch size: 46, lr: 5.62e-03, grad_scale: 8.0
2023-03-09 13:12:37,002 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-09 13:12:37,473 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5184, 5.5080, 5.1246, 5.4657, 5.4159, 4.8685, 5.3338, 5.1516], device='cuda:3'), covar=tensor([0.0403, 0.0415, 0.1348, 0.0779, 0.0650, 0.0370, 0.0415, 0.0993], device='cuda:3'), in_proj_covar=tensor([0.0488, 0.0552, 0.0705, 0.0436, 0.0447, 0.0501, 0.0539, 0.0678], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 13:13:05,887 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1273, 5.6496, 3.1125, 5.4253, 5.3272, 5.6512, 5.4625, 2.9308], device='cuda:3'), covar=tensor([0.0190, 0.0061, 0.0630, 0.0067, 0.0066, 0.0069, 0.0082, 0.0924], device='cuda:3'), in_proj_covar=tensor([0.0087, 0.0080, 0.0095, 0.0094, 0.0084, 0.0075, 0.0084, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 13:13:07,754 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.564e+02 2.655e+02 3.143e+02 3.839e+02 8.943e+02, threshold=6.287e+02, percent-clipped=3.0
2023-03-09 13:13:32,182 INFO [train.py:898] (3/4) Epoch 20, batch 2550, loss[loss=0.1761, simple_loss=0.2687, pruned_loss=0.0417, over 18495.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.255, pruned_loss=0.03793, over 3579484.09 frames. ], batch size: 51, lr: 5.62e-03, grad_scale: 8.0
2023-03-09 13:13:41,229 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. limit=5.0
2023-03-09 13:13:55,560 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=71616.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:14:31,323 INFO [train.py:898] (3/4) Epoch 20, batch 2600, loss[loss=0.1699, simple_loss=0.2575, pruned_loss=0.04115, over 17219.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.2545, pruned_loss=0.03762, over 3584110.11 frames. ], batch size: 78, lr: 5.62e-03, grad_scale: 8.0
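Each [train.py:898] record reports loss, simple_loss and pruned_loss for the pruned-transducer objective. The logged numbers are consistent with the total being pruned_loss plus half of simple_loss (e.g. 0.5 * 0.2575 + 0.04115 ~= 0.1699 for batch 2600 above, and 0.5 * 0.2687 + 0.0417 ~= 0.1761 for batch 2550). A sketch of that combination; the fixed 0.5 weight is an assumption for illustration, and the real recipe may schedule it during warm-up:

simple_loss_scale = 0.5  # assumed constant here

def combine_losses(simple_loss: float, pruned_loss: float) -> float:
    # Total objective: down-weighted "simple" (trivial-joiner) RNN-T loss
    # plus the full pruned RNN-T loss.
    return simple_loss_scale * simple_loss + pruned_loss

# Check against the batch 2600 entry above:
print(combine_losses(0.2575, 0.04115))  # ~0.1699, matching the logged loss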
2023-03-09 13:14:32,817 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=71648.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:14:54,908 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7062, 3.6574, 4.9281, 2.8205, 4.3515, 2.6207, 3.0747, 1.8091], device='cuda:3'), covar=tensor([0.1214, 0.0931, 0.0192, 0.0923, 0.0525, 0.2557, 0.2692, 0.2135], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0243, 0.0184, 0.0194, 0.0256, 0.0269, 0.0319, 0.0232], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 13:15:05,045 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.778e+02 2.596e+02 2.940e+02 3.683e+02 1.082e+03, threshold=5.881e+02, percent-clipped=3.0
2023-03-09 13:15:06,363 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4659, 5.4617, 5.0989, 5.4275, 5.3529, 4.8163, 5.2668, 5.1032], device='cuda:3'), covar=tensor([0.0418, 0.0423, 0.1312, 0.0693, 0.0633, 0.0378, 0.0456, 0.1035], device='cuda:3'), in_proj_covar=tensor([0.0491, 0.0552, 0.0704, 0.0434, 0.0449, 0.0500, 0.0542, 0.0679], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 13:15:06,486 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71677.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:15:16,499 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=71686.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:15:28,093 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=71696.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:15:29,061 INFO [train.py:898] (3/4) Epoch 20, batch 2650, loss[loss=0.1789, simple_loss=0.2722, pruned_loss=0.04285, over 18394.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2556, pruned_loss=0.03827, over 3596761.67 frames. ], batch size: 52, lr: 5.61e-03, grad_scale: 8.0
2023-03-09 13:15:56,597 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8281, 3.0048, 2.4035, 3.1495, 3.7953, 3.6841, 3.3495, 3.1418], device='cuda:3'), covar=tensor([0.0174, 0.0253, 0.0712, 0.0307, 0.0196, 0.0153, 0.0295, 0.0303], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0131, 0.0161, 0.0154, 0.0128, 0.0114, 0.0150, 0.0153], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:16:12,124 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=71734.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:16:27,161 INFO [train.py:898] (3/4) Epoch 20, batch 2700, loss[loss=0.1696, simple_loss=0.2634, pruned_loss=0.03789, over 18387.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2557, pruned_loss=0.03821, over 3596618.45 frames. ], batch size: 52, lr: 5.61e-03, grad_scale: 8.0
2023-03-09 13:16:42,603 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3952, 5.8561, 5.4519, 5.6781, 5.4025, 5.3968, 5.9241, 5.8228], device='cuda:3'), covar=tensor([0.1152, 0.0836, 0.0474, 0.0751, 0.1470, 0.0616, 0.0552, 0.0787], device='cuda:3'), in_proj_covar=tensor([0.0609, 0.0528, 0.0383, 0.0550, 0.0739, 0.0537, 0.0742, 0.0561], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 13:16:49,191 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6821, 2.7968, 2.7053, 3.0749, 3.7305, 3.6096, 3.3156, 3.0723], device='cuda:3'), covar=tensor([0.0172, 0.0289, 0.0551, 0.0373, 0.0191, 0.0177, 0.0348, 0.0343], device='cuda:3'), in_proj_covar=tensor([0.0139, 0.0131, 0.0160, 0.0152, 0.0127, 0.0113, 0.0149, 0.0152], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:17:01,229 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.908e+02 2.794e+02 3.196e+02 3.940e+02 6.442e+02, threshold=6.393e+02, percent-clipped=2.0
2023-03-09 13:17:25,386 INFO [train.py:898] (3/4) Epoch 20, batch 2750, loss[loss=0.1654, simple_loss=0.2531, pruned_loss=0.03888, over 18412.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2555, pruned_loss=0.03795, over 3602231.76 frames. ], batch size: 48, lr: 5.61e-03, grad_scale: 8.0
2023-03-09 13:18:23,265 INFO [train.py:898] (3/4) Epoch 20, batch 2800, loss[loss=0.2121, simple_loss=0.2875, pruned_loss=0.06834, over 12795.00 frames. ], tot_loss[loss=0.165, simple_loss=0.2547, pruned_loss=0.03767, over 3599055.56 frames. ], batch size: 129, lr: 5.61e-03, grad_scale: 8.0
2023-03-09 13:18:56,840 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.812e+02 2.512e+02 3.220e+02 3.992e+02 1.001e+03, threshold=6.440e+02, percent-clipped=3.0
2023-03-09 13:19:22,130 INFO [train.py:898] (3/4) Epoch 20, batch 2850, loss[loss=0.1588, simple_loss=0.243, pruned_loss=0.03725, over 18369.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.2543, pruned_loss=0.03769, over 3592459.35 frames. ], batch size: 50, lr: 5.61e-03, grad_scale: 8.0
2023-03-09 13:19:49,026 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=71919.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:19:49,045 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1817, 2.5651, 2.3051, 2.6671, 3.3117, 3.1208, 2.8537, 2.7369], device='cuda:3'), covar=tensor([0.0187, 0.0299, 0.0586, 0.0426, 0.0208, 0.0186, 0.0406, 0.0347], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0132, 0.0161, 0.0155, 0.0128, 0.0115, 0.0152, 0.0153], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:19:56,113 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-09 13:20:16,556 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-09 13:20:21,472 INFO [train.py:898] (3/4) Epoch 20, batch 2900, loss[loss=0.1633, simple_loss=0.2529, pruned_loss=0.0369, over 18407.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2548, pruned_loss=0.03771, over 3584610.66 frames. ], batch size: 48, lr: 5.60e-03, grad_scale: 8.0
2023-03-09 13:20:51,451 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=71972.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:20:55,782 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.736e+02 2.545e+02 2.901e+02 3.371e+02 5.685e+02, threshold=5.802e+02, percent-clipped=0.0
2023-03-09 13:20:57,415 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6523, 3.6796, 4.9305, 2.7979, 4.3404, 2.6192, 3.0953, 1.7285], device='cuda:3'), covar=tensor([0.1251, 0.0902, 0.0180, 0.0972, 0.0597, 0.2552, 0.2684, 0.2248], device='cuda:3'), in_proj_covar=tensor([0.0219, 0.0244, 0.0185, 0.0195, 0.0257, 0.0270, 0.0320, 0.0234], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 13:21:00,791 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71980.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:21:15,727 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-09 13:21:20,629 INFO [train.py:898] (3/4) Epoch 20, batch 2950, loss[loss=0.1913, simple_loss=0.2781, pruned_loss=0.05222, over 18463.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2553, pruned_loss=0.0377, over 3592091.66 frames. ], batch size: 59, lr: 5.60e-03, grad_scale: 8.0
2023-03-09 13:21:53,073 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7771, 4.4748, 4.4850, 3.3689, 3.7294, 3.3399, 2.6332, 2.2444], device='cuda:3'), covar=tensor([0.0228, 0.0142, 0.0077, 0.0290, 0.0339, 0.0232, 0.0665, 0.0928], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0059, 0.0062, 0.0068, 0.0088, 0.0066, 0.0076, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 13:21:59,048 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-09 13:22:24,558 INFO [train.py:898] (3/4) Epoch 20, batch 3000, loss[loss=0.1881, simple_loss=0.275, pruned_loss=0.05056, over 18375.00 frames. ], tot_loss[loss=0.1647, simple_loss=0.2547, pruned_loss=0.0374, over 3602885.85 frames. ], batch size: 56, lr: 5.60e-03, grad_scale: 8.0
2023-03-09 13:22:24,559 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 13:22:36,475 INFO [train.py:932] (3/4) Epoch 20, validation: loss=0.1501, simple_loss=0.25, pruned_loss=0.02514, over 944034.00 frames.
2023-03-09 13:22:36,476 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 13:23:10,305 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.792e+02 2.528e+02 3.006e+02 3.462e+02 4.966e+02, threshold=6.013e+02, percent-clipped=0.0
2023-03-09 13:23:33,870 INFO [train.py:898] (3/4) Epoch 20, batch 3050, loss[loss=0.1816, simple_loss=0.2671, pruned_loss=0.04807, over 18346.00 frames. ], tot_loss[loss=0.164, simple_loss=0.2538, pruned_loss=0.03704, over 3607223.28 frames. ], batch size: 56, lr: 5.60e-03, grad_scale: 8.0
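The "Computing validation loss" / "Epoch 20, validation: ..." records above come from a periodic pass over the held-out dev set (run every valid_interval batches). A minimal sketch of such a pass; frame-weighted averaging is inferred from the "over 944034.00 frames" wording, and all names here are illustrative rather than the recipe's actual API:

import torch

def compute_validation_loss(model, valid_loader):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            # Assumed interface: per-batch average loss plus its frame count.
            loss, num_frames = model(batch)
            tot_loss += loss.item() * num_frames
            tot_frames += num_frames
    model.train()
    # e.g. "validation: loss=0.1501 ... over 944034.00 frames" above
    return tot_loss / tot_frames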
2023-03-09 13:23:55,454 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5817, 6.0466, 5.5465, 5.8530, 5.6201, 5.5438, 6.0968, 5.9941], device='cuda:3'), covar=tensor([0.1064, 0.0744, 0.0442, 0.0709, 0.1347, 0.0644, 0.0535, 0.0697], device='cuda:3'), in_proj_covar=tensor([0.0607, 0.0523, 0.0379, 0.0546, 0.0736, 0.0536, 0.0734, 0.0558], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3')
2023-03-09 13:24:11,916 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72129.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:24:31,796 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-09 13:24:31,891 INFO [train.py:898] (3/4) Epoch 20, batch 3100, loss[loss=0.1738, simple_loss=0.2616, pruned_loss=0.04304, over 18271.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.2538, pruned_loss=0.03718, over 3616632.12 frames. ], batch size: 57, lr: 5.60e-03, grad_scale: 8.0
2023-03-09 13:25:05,468 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.922e+02 2.879e+02 3.465e+02 4.058e+02 1.741e+03, threshold=6.931e+02, percent-clipped=3.0
2023-03-09 13:25:22,412 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=72190.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:25:29,926 INFO [train.py:898] (3/4) Epoch 20, batch 3150, loss[loss=0.1515, simple_loss=0.238, pruned_loss=0.0325, over 18391.00 frames. ], tot_loss[loss=0.1646, simple_loss=0.2544, pruned_loss=0.03742, over 3611312.94 frames. ], batch size: 46, lr: 5.59e-03, grad_scale: 8.0
2023-03-09 13:25:34,420 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
2023-03-09 13:26:28,293 INFO [train.py:898] (3/4) Epoch 20, batch 3200, loss[loss=0.1538, simple_loss=0.2357, pruned_loss=0.0359, over 18486.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.2533, pruned_loss=0.03743, over 3596808.71 frames. ], batch size: 47, lr: 5.59e-03, grad_scale: 8.0
2023-03-09 13:26:58,298 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72272.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:27:01,434 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=72275.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:27:02,249 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.897e+02 2.541e+02 3.039e+02 3.727e+02 6.894e+02, threshold=6.078e+02, percent-clipped=0.0
2023-03-09 13:27:26,962 INFO [train.py:898] (3/4) Epoch 20, batch 3250, loss[loss=0.1415, simple_loss=0.2257, pruned_loss=0.0287, over 18504.00 frames. ], tot_loss[loss=0.1639, simple_loss=0.2532, pruned_loss=0.03726, over 3600889.62 frames. ], batch size: 44, lr: 5.59e-03, grad_scale: 8.0
2023-03-09 13:27:35,420 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0069, 4.6746, 4.7461, 3.6680, 3.8425, 3.6423, 2.9049, 2.6735], device='cuda:3'), covar=tensor([0.0162, 0.0115, 0.0061, 0.0236, 0.0329, 0.0185, 0.0621, 0.0730], device='cuda:3'), in_proj_covar=tensor([0.0068, 0.0057, 0.0060, 0.0066, 0.0086, 0.0064, 0.0075, 0.0081], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 13:27:54,549 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=72320.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:28:01,609 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72326.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:28:09,223 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-09 13:28:12,877 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1067, 5.1727, 5.1764, 4.9079, 4.9849, 4.9961, 5.3176, 5.2737], device='cuda:3'), covar=tensor([0.0066, 0.0065, 0.0059, 0.0103, 0.0053, 0.0141, 0.0066, 0.0084], device='cuda:3'), in_proj_covar=tensor([0.0092, 0.0069, 0.0072, 0.0091, 0.0074, 0.0102, 0.0086, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 13:28:17,784 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-09 13:28:26,292 INFO [train.py:898] (3/4) Epoch 20, batch 3300, loss[loss=0.1445, simple_loss=0.2241, pruned_loss=0.0324, over 18449.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2525, pruned_loss=0.03703, over 3601264.52 frames. ], batch size: 43, lr: 5.59e-03, grad_scale: 8.0
2023-03-09 13:28:59,550 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.671e+02 2.585e+02 3.093e+02 3.745e+02 6.095e+02, threshold=6.186e+02, percent-clipped=1.0
2023-03-09 13:29:12,949 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=72387.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:29:24,302 INFO [train.py:898] (3/4) Epoch 20, batch 3350, loss[loss=0.1569, simple_loss=0.2385, pruned_loss=0.03765, over 18381.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2531, pruned_loss=0.03728, over 3600575.39 frames. ], batch size: 42, lr: 5.59e-03, grad_scale: 8.0
2023-03-09 13:29:30,614 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5254, 5.4801, 5.1386, 5.4584, 5.3907, 4.8787, 5.3314, 5.1069], device='cuda:3'), covar=tensor([0.0385, 0.0413, 0.1302, 0.0719, 0.0605, 0.0385, 0.0397, 0.0990], device='cuda:3'), in_proj_covar=tensor([0.0493, 0.0557, 0.0704, 0.0434, 0.0447, 0.0507, 0.0545, 0.0678], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 13:29:55,170 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0
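The [zipformer.py:625] records track, per encoder stack, a warm-up window in batches (warmup_begin/warmup_end) and which layers were randomly dropped for the current forward pass (usually none this late in training, occasionally one). A sketch of a schedule with that shape; the probabilities below are assumptions chosen only to illustrate the mechanism, not the recipe's actual values:

import random

def pick_layers_to_drop(num_layers: int, batch_count: float,
                        warmup_begin: float, warmup_end: float) -> set:
    # Assumed schedule: drop aggressively before the warm-up window,
    # decay across it, and keep only a small residual rate afterwards.
    if batch_count < warmup_begin:
        p = 0.5
    elif batch_count < warmup_end:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = 0.5 * (1.0 - frac) + 0.05 * frac
    else:
        p = 0.05  # residual rate; matches the occasional num_to_drop=1 above
    return {i for i in range(num_layers) if random.random() < p}

layers = pick_layers_to_drop(4, batch_count=72326.0,
                             warmup_begin=2666.7, warmup_end=3333.3)
print(f"num_to_drop={len(layers)}, layers_to_drop={layers or set()}")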
2023-03-09 13:30:06,238 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4614, 2.7663, 2.5111, 2.8636, 3.6034, 3.4852, 3.0543, 2.9424], device='cuda:3'), covar=tensor([0.0174, 0.0275, 0.0556, 0.0401, 0.0185, 0.0165, 0.0377, 0.0352], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0130, 0.0160, 0.0154, 0.0128, 0.0114, 0.0151, 0.0154], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:30:23,821 INFO [train.py:898] (3/4) Epoch 20, batch 3400, loss[loss=0.1552, simple_loss=0.2353, pruned_loss=0.03759, over 18358.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2521, pruned_loss=0.03711, over 3608071.46 frames. ], batch size: 46, lr: 5.58e-03, grad_scale: 8.0
2023-03-09 13:30:57,203 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.915e+02 2.809e+02 3.353e+02 3.945e+02 1.222e+03, threshold=6.706e+02, percent-clipped=5.0
2023-03-09 13:31:07,774 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=72485.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:31:22,253 INFO [train.py:898] (3/4) Epoch 20, batch 3450, loss[loss=0.1782, simple_loss=0.2748, pruned_loss=0.0408, over 18340.00 frames. ], tot_loss[loss=0.1636, simple_loss=0.2527, pruned_loss=0.03723, over 3599315.84 frames. ], batch size: 56, lr: 5.58e-03, grad_scale: 8.0
2023-03-09 13:31:54,993 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5111, 2.6812, 4.2396, 3.5780, 2.7275, 4.4834, 3.8806, 2.6492], device='cuda:3'), covar=tensor([0.0513, 0.1532, 0.0292, 0.0448, 0.1471, 0.0215, 0.0533, 0.1112], device='cuda:3'), in_proj_covar=tensor([0.0209, 0.0237, 0.0208, 0.0162, 0.0223, 0.0210, 0.0245, 0.0198], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 13:32:20,119 INFO [train.py:898] (3/4) Epoch 20, batch 3500, loss[loss=0.1818, simple_loss=0.2763, pruned_loss=0.04364, over 18353.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2534, pruned_loss=0.03761, over 3593469.28 frames. ], batch size: 56, lr: 5.58e-03, grad_scale: 16.0
2023-03-09 13:32:26,536 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6122, 4.3025, 4.2876, 3.2889, 3.5545, 3.2983, 2.5011, 2.2071], device='cuda:3'), covar=tensor([0.0219, 0.0120, 0.0080, 0.0294, 0.0335, 0.0229, 0.0700, 0.0900], device='cuda:3'), in_proj_covar=tensor([0.0069, 0.0058, 0.0061, 0.0068, 0.0087, 0.0065, 0.0076, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 13:32:52,279 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72575.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:32:53,021 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.775e+02 2.575e+02 2.979e+02 3.509e+02 6.316e+02, threshold=5.957e+02, percent-clipped=0.0
2023-03-09 13:33:00,863 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6661, 4.7714, 4.7677, 4.5201, 4.5792, 4.5311, 4.9006, 4.8878], device='cuda:3'), covar=tensor([0.0075, 0.0073, 0.0067, 0.0114, 0.0068, 0.0136, 0.0078, 0.0089], device='cuda:3'), in_proj_covar=tensor([0.0091, 0.0068, 0.0072, 0.0091, 0.0073, 0.0103, 0.0085, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 13:33:14,310 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0
2023-03-09 13:33:16,610 INFO [train.py:898] (3/4) Epoch 20, batch 3550, loss[loss=0.1915, simple_loss=0.2857, pruned_loss=0.04863, over 18107.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2543, pruned_loss=0.03797, over 3587706.89 frames. ], batch size: 62, lr: 5.58e-03, grad_scale: 16.0
2023-03-09 13:33:45,204 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=72623.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:33:45,378 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8403, 4.8644, 4.9188, 4.6758, 4.6864, 4.6980, 5.0329, 5.0159], device='cuda:3'), covar=tensor([0.0071, 0.0072, 0.0058, 0.0106, 0.0063, 0.0136, 0.0076, 0.0092], device='cuda:3'), in_proj_covar=tensor([0.0092, 0.0069, 0.0072, 0.0091, 0.0074, 0.0103, 0.0086, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 13:34:10,342 INFO [train.py:898] (3/4) Epoch 20, batch 3600, loss[loss=0.1538, simple_loss=0.2476, pruned_loss=0.02998, over 18496.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.2543, pruned_loss=0.03777, over 3592361.02 frames. ], batch size: 51, lr: 5.58e-03, grad_scale: 8.0
2023-03-09 13:34:19,343 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5281, 3.4645, 3.3313, 2.9697, 3.3007, 2.7433, 2.6573, 3.4966], device='cuda:3'), covar=tensor([0.0055, 0.0089, 0.0081, 0.0134, 0.0088, 0.0184, 0.0202, 0.0068], device='cuda:3'), in_proj_covar=tensor([0.0136, 0.0155, 0.0131, 0.0183, 0.0139, 0.0178, 0.0180, 0.0117], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 13:34:28,482 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-09 13:34:42,422 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.959e+02 2.531e+02 3.153e+02 3.638e+02 9.354e+02, threshold=6.307e+02, percent-clipped=0.0
2023-03-09 13:35:15,806 INFO [train.py:898] (3/4) Epoch 21, batch 0, loss[loss=0.1404, simple_loss=0.2226, pruned_loss=0.02909, over 18182.00 frames. ], tot_loss[loss=0.1404, simple_loss=0.2226, pruned_loss=0.02909, over 18182.00 frames. ], batch size: 44, lr: 5.44e-03, grad_scale: 8.0
2023-03-09 13:35:15,807 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 13:35:27,494 INFO [train.py:932] (3/4) Epoch 21, validation: loss=0.1511, simple_loss=0.2511, pruned_loss=0.02556, over 944034.00 frames.
2023-03-09 13:35:27,495 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 13:35:28,916 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=72682.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:36:07,793 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.78 vs. limit=2.0
2023-03-09 13:36:16,539 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8012, 3.2454, 3.9512, 2.7841, 3.6360, 2.6514, 2.7584, 2.2802], device='cuda:3'), covar=tensor([0.0985, 0.0896, 0.0285, 0.0805, 0.0663, 0.2221, 0.2427, 0.1709], device='cuda:3'), in_proj_covar=tensor([0.0219, 0.0244, 0.0186, 0.0196, 0.0257, 0.0271, 0.0323, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 13:36:26,172 INFO [train.py:898] (3/4) Epoch 21, batch 50, loss[loss=0.2367, simple_loss=0.3114, pruned_loss=0.08095, over 12924.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2559, pruned_loss=0.03898, over 792509.79 frames. ], batch size: 129, lr: 5.44e-03, grad_scale: 8.0
2023-03-09 13:37:00,151 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-09 13:37:13,054 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9428, 5.2059, 5.2000, 5.3147, 4.9474, 5.8007, 5.3869, 5.1092], device='cuda:3'), covar=tensor([0.1252, 0.0636, 0.0821, 0.0874, 0.1414, 0.0804, 0.0798, 0.1801], device='cuda:3'), in_proj_covar=tensor([0.0352, 0.0280, 0.0306, 0.0305, 0.0322, 0.0415, 0.0277, 0.0407], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004], device='cuda:3')
2023-03-09 13:37:20,701 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.036e+02 2.565e+02 3.180e+02 3.639e+02 9.362e+02, threshold=6.360e+02, percent-clipped=2.0
2023-03-09 13:37:25,030 INFO [train.py:898] (3/4) Epoch 21, batch 100, loss[loss=0.1946, simple_loss=0.2719, pruned_loss=0.05861, over 12763.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2556, pruned_loss=0.03868, over 1417088.22 frames. ], batch size: 130, lr: 5.43e-03, grad_scale: 8.0
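grad_scale in the [train.py:898] records moves between 8.0, 16.0 and back (see batches 3500-3600 above), which is the signature of dynamic fp16 loss scaling: the scale is grown periodically and halved whenever gradients overflow. A sketch using the standard torch.cuda.amp API; the init_scale and growth_interval values are illustrative, not the recipe's actual settings:

import torch

scaler = torch.cuda.amp.GradScaler(init_scale=8.0, growth_interval=2000)

def training_step(model, optimizer, batch, compute_loss):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = compute_loss(model, batch)
    scaler.scale(loss).backward()   # backprop on the scaled loss
    scaler.step(optimizer)          # unscales grads; skips the step on inf/nan
    scaler.update()                 # grows the scale periodically, halves on overflow
    return loss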
2023-03-09 13:37:29,847 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72785.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:38:11,874 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.4370, 3.0590, 3.8628, 3.6825, 2.9891, 2.9543, 3.6646, 4.0072], device='cuda:3'), covar=tensor([0.0788, 0.1311, 0.0272, 0.0362, 0.0821, 0.0948, 0.0384, 0.0290], device='cuda:3'), in_proj_covar=tensor([0.0149, 0.0276, 0.0151, 0.0181, 0.0192, 0.0191, 0.0195, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:38:23,944 INFO [train.py:898] (3/4) Epoch 21, batch 150, loss[loss=0.1426, simple_loss=0.2251, pruned_loss=0.03006, over 18488.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2545, pruned_loss=0.03798, over 1900239.71 frames. ], batch size: 44, lr: 5.43e-03, grad_scale: 8.0
2023-03-09 13:38:26,467 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=72833.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:38:38,164 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.50 vs. limit=5.0
2023-03-09 13:39:17,915 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.843e+02 2.851e+02 3.537e+02 4.282e+02 1.325e+03, threshold=7.073e+02, percent-clipped=5.0
2023-03-09 13:39:22,609 INFO [train.py:898] (3/4) Epoch 21, batch 200, loss[loss=0.1656, simple_loss=0.2617, pruned_loss=0.03478, over 18486.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2542, pruned_loss=0.03807, over 2255976.97 frames. ], batch size: 51, lr: 5.43e-03, grad_scale: 8.0
2023-03-09 13:39:31,949 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3993, 2.7138, 2.4007, 2.7917, 3.4968, 3.4269, 2.9968, 2.9482], device='cuda:3'), covar=tensor([0.0202, 0.0276, 0.0595, 0.0414, 0.0215, 0.0181, 0.0402, 0.0358], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0131, 0.0162, 0.0155, 0.0129, 0.0115, 0.0151, 0.0154], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:39:37,457 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72894.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:39:57,225 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5604, 3.3975, 2.1655, 4.3401, 2.9313, 4.2017, 2.4367, 3.8957], device='cuda:3'), covar=tensor([0.0649, 0.0844, 0.1451, 0.0513, 0.0866, 0.0332, 0.1185, 0.0430], device='cuda:3'), in_proj_covar=tensor([0.0214, 0.0227, 0.0189, 0.0283, 0.0192, 0.0263, 0.0204, 0.0198], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:40:21,006 INFO [train.py:898] (3/4) Epoch 21, batch 250, loss[loss=0.1918, simple_loss=0.2777, pruned_loss=0.05294, over 18482.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2544, pruned_loss=0.03764, over 2554232.45 frames. ], batch size: 59, lr: 5.43e-03, grad_scale: 8.0
2023-03-09 13:40:49,006 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=72955.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:40:59,933 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72964.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 13:41:14,360 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.784e+02 2.579e+02 3.135e+02 3.848e+02 6.941e+02, threshold=6.270e+02, percent-clipped=0.0
2023-03-09 13:41:15,895 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5119, 2.9026, 4.1133, 3.6287, 2.5153, 4.3242, 3.8619, 2.7836], device='cuda:3'), covar=tensor([0.0484, 0.1235, 0.0278, 0.0380, 0.1554, 0.0212, 0.0519, 0.0954], device='cuda:3'), in_proj_covar=tensor([0.0208, 0.0237, 0.0208, 0.0161, 0.0222, 0.0209, 0.0244, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 13:41:18,868 INFO [train.py:898] (3/4) Epoch 21, batch 300, loss[loss=0.1463, simple_loss=0.2291, pruned_loss=0.03172, over 18445.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2554, pruned_loss=0.03791, over 2789021.81 frames. ], batch size: 43, lr: 5.43e-03, grad_scale: 8.0
2023-03-09 13:41:20,225 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72982.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:42:11,315 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73025.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 13:42:16,848 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=73030.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:42:17,824 INFO [train.py:898] (3/4) Epoch 21, batch 350, loss[loss=0.1357, simple_loss=0.22, pruned_loss=0.02566, over 18429.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2551, pruned_loss=0.03751, over 2959165.72 frames. ], batch size: 43, lr: 5.43e-03, grad_scale: 8.0
2023-03-09 13:43:11,920 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.597e+02 2.497e+02 3.036e+02 3.677e+02 5.898e+02, threshold=6.073e+02, percent-clipped=0.0
2023-03-09 13:43:16,497 INFO [train.py:898] (3/4) Epoch 21, batch 400, loss[loss=0.1695, simple_loss=0.2628, pruned_loss=0.03814, over 18612.00 frames. ], tot_loss[loss=0.1647, simple_loss=0.2545, pruned_loss=0.03739, over 3096928.77 frames. ], batch size: 52, lr: 5.42e-03, grad_scale: 8.0
2023-03-09 13:43:48,884 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6953, 2.8754, 4.4017, 3.8016, 2.6629, 4.6886, 4.0483, 3.0148], device='cuda:3'), covar=tensor([0.0473, 0.1505, 0.0275, 0.0391, 0.1608, 0.0184, 0.0573, 0.0966], device='cuda:3'), in_proj_covar=tensor([0.0205, 0.0234, 0.0206, 0.0160, 0.0219, 0.0207, 0.0241, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 13:43:56,472 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.52 vs. limit=5.0
2023-03-09 13:44:14,450 INFO [train.py:898] (3/4) Epoch 21, batch 450, loss[loss=0.1858, simple_loss=0.2833, pruned_loss=0.04415, over 18232.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2555, pruned_loss=0.03785, over 3207074.83 frames. ], batch size: 60, lr: 5.42e-03, grad_scale: 8.0
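In the Epoch 21 records above, tot_loss is reported "over" a frame count that grows batch by batch (792509.79, 1417088.22, 1900239.71, ...), i.e. it is a frame-weighted running average accumulated across the epoch. A minimal sketch of such a tracker; a plain cumulative average is shown for clarity, while the actual implementation may also decay or periodically reset the statistics:

class RunningLoss:
    """Frame-weighted running average, like the logged tot_loss."""

    def __init__(self):
        self.weighted_sum = 0.0
        self.frames = 0.0

    def update(self, loss: float, num_frames: float):
        self.weighted_sum += loss * num_frames
        self.frames += num_frames

    @property
    def value(self) -> float:
        return self.weighted_sum / self.frames

tot = RunningLoss()
tot.update(0.1404, 18182.0)  # Epoch 21, batch 0, from the records above
tot.update(0.2367, 12924.0)  # toy continuation; the real log pools ~50 batches
print(f"tot_loss={tot.value:.4f}, over {tot.frames:.2f} frames")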
2023-03-09 13:44:47,160 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6237, 2.9140, 2.5624, 2.9810, 3.7322, 3.6209, 3.2497, 3.0835], device='cuda:3'), covar=tensor([0.0181, 0.0271, 0.0545, 0.0320, 0.0151, 0.0137, 0.0336, 0.0404], device='cuda:3'), in_proj_covar=tensor([0.0138, 0.0131, 0.0161, 0.0154, 0.0127, 0.0114, 0.0150, 0.0153], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:45:06,944 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.937e+02 2.746e+02 3.322e+02 3.984e+02 6.045e+02, threshold=6.644e+02, percent-clipped=0.0
2023-03-09 13:45:12,780 INFO [train.py:898] (3/4) Epoch 21, batch 500, loss[loss=0.1509, simple_loss=0.2413, pruned_loss=0.03022, over 18289.00 frames. ], tot_loss[loss=0.1645, simple_loss=0.2547, pruned_loss=0.03721, over 3298451.71 frames. ], batch size: 49, lr: 5.42e-03, grad_scale: 8.0
2023-03-09 13:45:38,515 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5824, 2.8104, 2.5906, 2.9600, 3.6050, 3.5610, 3.1270, 2.9593], device='cuda:3'), covar=tensor([0.0223, 0.0311, 0.0487, 0.0319, 0.0213, 0.0166, 0.0333, 0.0367], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0133, 0.0162, 0.0155, 0.0129, 0.0115, 0.0152, 0.0154], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:46:10,611 INFO [train.py:898] (3/4) Epoch 21, batch 550, loss[loss=0.14, simple_loss=0.2271, pruned_loss=0.02646, over 18379.00 frames. ], tot_loss[loss=0.1642, simple_loss=0.2541, pruned_loss=0.03712, over 3368157.23 frames. ], batch size: 46, lr: 5.42e-03, grad_scale: 8.0
2023-03-09 13:46:32,312 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73250.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:46:53,630 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.71 vs. limit=5.0
2023-03-09 13:47:03,606 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.734e+02 2.558e+02 3.003e+02 3.637e+02 1.061e+03, threshold=6.005e+02, percent-clipped=3.0
2023-03-09 13:47:08,048 INFO [train.py:898] (3/4) Epoch 21, batch 600, loss[loss=0.1626, simple_loss=0.2531, pruned_loss=0.03607, over 18278.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2543, pruned_loss=0.03715, over 3422604.04 frames. ], batch size: 49, lr: 5.42e-03, grad_scale: 8.0
2023-03-09 13:47:22,533 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=73293.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:47:53,607 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73320.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 13:48:06,376 INFO [train.py:898] (3/4) Epoch 21, batch 650, loss[loss=0.1652, simple_loss=0.2556, pruned_loss=0.0374, over 17924.00 frames. ], tot_loss[loss=0.164, simple_loss=0.254, pruned_loss=0.03697, over 3456293.30 frames. ], batch size: 65, lr: 5.41e-03, grad_scale: 8.0
2023-03-09 13:48:33,540 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73354.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:49:00,063 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.798e+02 2.707e+02 3.166e+02 4.000e+02 8.066e+02, threshold=6.331e+02, percent-clipped=2.0
2023-03-09 13:49:04,586 INFO [train.py:898] (3/4) Epoch 21, batch 700, loss[loss=0.1863, simple_loss=0.2751, pruned_loss=0.04875, over 18125.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2545, pruned_loss=0.03706, over 3489837.18 frames. ], batch size: 62, lr: 5.41e-03, grad_scale: 8.0
2023-03-09 13:49:43,889 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5371, 2.1292, 2.4006, 2.4570, 2.8793, 4.6614, 4.5006, 3.4110], device='cuda:3'), covar=tensor([0.1924, 0.2718, 0.3255, 0.2030, 0.2821, 0.0235, 0.0402, 0.0826], device='cuda:3'), in_proj_covar=tensor([0.0299, 0.0347, 0.0382, 0.0277, 0.0388, 0.0240, 0.0296, 0.0255], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3')
2023-03-09 13:50:02,188 INFO [train.py:898] (3/4) Epoch 21, batch 750, loss[loss=0.1641, simple_loss=0.2595, pruned_loss=0.0343, over 18484.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2543, pruned_loss=0.03718, over 3520639.51 frames. ], batch size: 59, lr: 5.41e-03, grad_scale: 8.0
2023-03-09 13:50:09,088 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6064, 2.8194, 2.6109, 2.9613, 3.6423, 3.5847, 3.1672, 2.9666], device='cuda:3'), covar=tensor([0.0172, 0.0309, 0.0536, 0.0365, 0.0213, 0.0145, 0.0356, 0.0419], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0132, 0.0161, 0.0154, 0.0128, 0.0115, 0.0152, 0.0153], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 13:50:15,160 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7709, 4.5257, 4.6533, 3.4885, 3.7743, 3.4133, 2.4995, 2.4729], device='cuda:3'), covar=tensor([0.0227, 0.0152, 0.0070, 0.0300, 0.0307, 0.0243, 0.0781, 0.0820], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0059, 0.0062, 0.0068, 0.0089, 0.0066, 0.0077, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 13:50:54,402 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.921e+02 2.615e+02 3.165e+02 3.785e+02 6.213e+02, threshold=6.329e+02, percent-clipped=0.0
2023-03-09 13:50:59,650 INFO [train.py:898] (3/4) Epoch 21, batch 800, loss[loss=0.1474, simple_loss=0.2286, pruned_loss=0.0331, over 18414.00 frames. ], tot_loss[loss=0.1642, simple_loss=0.2539, pruned_loss=0.03728, over 3515173.83 frames. ], batch size: 42, lr: 5.41e-03, grad_scale: 8.0
2023-03-09 13:51:01,404 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0
2023-03-09 13:51:43,764 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.23 vs. limit=5.0
2023-03-09 13:51:56,993 INFO [train.py:898] (3/4) Epoch 21, batch 850, loss[loss=0.1437, simple_loss=0.2322, pruned_loss=0.02761, over 18256.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.253, pruned_loss=0.03694, over 3522485.68 frames. ], batch size: 47, lr: 5.41e-03, grad_scale: 8.0
2023-03-09 13:52:20,217 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=73550.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:52:49,680 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-09 13:52:50,042 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.633e+02 2.526e+02 3.131e+02 3.569e+02 6.821e+02, threshold=6.262e+02, percent-clipped=1.0
2023-03-09 13:52:54,620 INFO [train.py:898] (3/4) Epoch 21, batch 900, loss[loss=0.1603, simple_loss=0.2518, pruned_loss=0.03443, over 16013.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2529, pruned_loss=0.03676, over 3544617.45 frames. ], batch size: 94, lr: 5.41e-03, grad_scale: 8.0
2023-03-09 13:53:06,113 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=73590.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:53:15,968 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=73598.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:53:32,104 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4635, 2.7394, 4.1402, 3.5674, 2.5269, 4.4584, 3.8338, 2.7023], device='cuda:3'), covar=tensor([0.0494, 0.1520, 0.0288, 0.0417, 0.1616, 0.0202, 0.0508, 0.1086], device='cuda:3'), in_proj_covar=tensor([0.0209, 0.0239, 0.0211, 0.0162, 0.0224, 0.0213, 0.0246, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 13:53:40,773 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=73620.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 13:53:53,023 INFO [train.py:898] (3/4) Epoch 21, batch 950, loss[loss=0.1654, simple_loss=0.2581, pruned_loss=0.03634, over 18641.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2531, pruned_loss=0.03678, over 3557412.37 frames. ], batch size: 52, lr: 5.40e-03, grad_scale: 8.0
2023-03-09 13:54:15,551 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73649.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:54:18,583 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73651.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:54:37,482 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=73668.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 13:54:47,215 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.873e+02 2.598e+02 2.980e+02 3.796e+02 7.725e+02, threshold=5.960e+02, percent-clipped=2.0
2023-03-09 13:54:51,743 INFO [train.py:898] (3/4) Epoch 21, batch 1000, loss[loss=0.166, simple_loss=0.2558, pruned_loss=0.03815, over 18479.00 frames. ], tot_loss[loss=0.1631, simple_loss=0.2527, pruned_loss=0.03675, over 3571503.88 frames. ], batch size: 53, lr: 5.40e-03, grad_scale: 8.0
2023-03-09 13:55:49,542 INFO [train.py:898] (3/4) Epoch 21, batch 1050, loss[loss=0.1661, simple_loss=0.2601, pruned_loss=0.03605, over 18575.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2529, pruned_loss=0.03679, over 3581206.16 frames. ], batch size: 54, lr: 5.40e-03, grad_scale: 8.0
2023-03-09 13:55:50,919 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=73732.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:56:43,398 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.657e+02 2.593e+02 3.039e+02 3.732e+02 7.995e+02, threshold=6.078e+02, percent-clipped=2.0
2023-03-09 13:56:47,974 INFO [train.py:898] (3/4) Epoch 21, batch 1100, loss[loss=0.1788, simple_loss=0.2682, pruned_loss=0.04466, over 17071.00 frames. ], tot_loss[loss=0.1637, simple_loss=0.2532, pruned_loss=0.03713, over 3587740.74 frames. ], batch size: 78, lr: 5.40e-03, grad_scale: 8.0
2023-03-09 13:57:01,686 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73793.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 13:57:46,778 INFO [train.py:898] (3/4) Epoch 21, batch 1150, loss[loss=0.1572, simple_loss=0.249, pruned_loss=0.03265, over 18507.00 frames. ], tot_loss[loss=0.1635, simple_loss=0.2529, pruned_loss=0.03704, over 3592338.29 frames. ], batch size: 51, lr: 5.40e-03, grad_scale: 8.0
2023-03-09 13:58:40,426 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.636e+02 2.572e+02 3.032e+02 3.666e+02 7.271e+02, threshold=6.063e+02, percent-clipped=3.0
2023-03-09 13:58:44,861 INFO [train.py:898] (3/4) Epoch 21, batch 1200, loss[loss=0.1595, simple_loss=0.2506, pruned_loss=0.03418, over 18390.00 frames. ], tot_loss[loss=0.1635, simple_loss=0.2528, pruned_loss=0.03708, over 3600796.73 frames. ], batch size: 50, lr: 5.39e-03, grad_scale: 8.0
2023-03-09 13:58:53,131 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.40 vs. limit=5.0
2023-03-09 13:59:00,891 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0
2023-03-09 13:59:42,881 INFO [train.py:898] (3/4) Epoch 21, batch 1250, loss[loss=0.1744, simple_loss=0.263, pruned_loss=0.04292, over 18051.00 frames. ], tot_loss[loss=0.1639, simple_loss=0.2531, pruned_loss=0.03735, over 3592350.27 frames. ], batch size: 65, lr: 5.39e-03, grad_scale: 8.0
2023-03-09 14:00:00,229 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73946.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 14:00:03,882 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=73949.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 14:00:22,602 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6639, 2.3178, 2.5998, 2.6733, 3.2163, 4.9228, 4.7750, 3.2859], device='cuda:3'), covar=tensor([0.1781, 0.2390, 0.2887, 0.1859, 0.2351, 0.0202, 0.0340, 0.0947], device='cuda:3'), in_proj_covar=tensor([0.0299, 0.0347, 0.0382, 0.0278, 0.0388, 0.0240, 0.0296, 0.0255], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3')
2023-03-09 14:00:37,357 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.681e+02 2.538e+02 2.943e+02 3.615e+02 7.778e+02, threshold=5.886e+02, percent-clipped=1.0
2023-03-09 14:00:41,921 INFO [train.py:898] (3/4) Epoch 21, batch 1300, loss[loss=0.1796, simple_loss=0.2719, pruned_loss=0.0437, over 18298.00 frames. ], tot_loss[loss=0.1636, simple_loss=0.2529, pruned_loss=0.0372, over 3595948.06 frames. ], batch size: 57, lr: 5.39e-03, grad_scale: 8.0
2023-03-09 14:01:00,287 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=73997.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 14:01:10,497 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0
2023-03-09 14:01:45,729 INFO [train.py:898] (3/4) Epoch 21, batch 1350, loss[loss=0.168, simple_loss=0.2597, pruned_loss=0.03812, over 18097.00 frames. ], tot_loss[loss=0.1644, simple_loss=0.2538, pruned_loss=0.03746, over 3585561.89 frames. ], batch size: 62, lr: 5.39e-03, grad_scale: 8.0
2023-03-09 14:02:19,908 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0111, 3.6916, 5.0591, 2.9097, 4.3268, 2.5856, 3.0670, 1.7011], device='cuda:3'), covar=tensor([0.1102, 0.0917, 0.0135, 0.0960, 0.0558, 0.2663, 0.2710, 0.2272], device='cuda:3'), in_proj_covar=tensor([0.0221, 0.0246, 0.0189, 0.0198, 0.0260, 0.0272, 0.0323, 0.0235], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 14:02:23,057 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74064.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 14:02:39,207 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.953e+02 2.917e+02 3.282e+02 4.135e+02 8.979e+02, threshold=6.564e+02, percent-clipped=10.0
2023-03-09 14:02:43,763 INFO [train.py:898] (3/4) Epoch 21, batch 1400, loss[loss=0.165, simple_loss=0.2571, pruned_loss=0.03649, over 18483.00 frames. ], tot_loss[loss=0.1646, simple_loss=0.2538, pruned_loss=0.03767, over 3594608.76 frames. ], batch size: 51, lr: 5.39e-03, grad_scale: 8.0
2023-03-09 14:02:51,630 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74088.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 14:03:35,145 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74125.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 14:03:41,421 INFO [train.py:898] (3/4) Epoch 21, batch 1450, loss[loss=0.1518, simple_loss=0.2353, pruned_loss=0.03411, over 18375.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.254, pruned_loss=0.03781, over 3593789.90 frames. ], batch size: 42, lr: 5.39e-03, grad_scale: 8.0
2023-03-09 14:04:35,773 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.991e+02 2.519e+02 2.943e+02 3.724e+02 1.372e+03, threshold=5.886e+02, percent-clipped=4.0
2023-03-09 14:04:40,827 INFO [train.py:898] (3/4) Epoch 21, batch 1500, loss[loss=0.1797, simple_loss=0.2718, pruned_loss=0.04382, over 18460.00 frames. ], tot_loss[loss=0.1646, simple_loss=0.2539, pruned_loss=0.03759, over 3592281.06 frames. ], batch size: 59, lr: 5.38e-03, grad_scale: 8.0
2023-03-09 14:05:06,312 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8042, 5.1693, 2.7884, 4.9304, 4.9267, 5.1061, 4.9133, 2.6686], device='cuda:3'), covar=tensor([0.0240, 0.0060, 0.0781, 0.0086, 0.0064, 0.0080, 0.0099, 0.1026], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0081, 0.0095, 0.0095, 0.0084, 0.0075, 0.0084, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 14:05:39,325 INFO [train.py:898] (3/4) Epoch 21, batch 1550, loss[loss=0.1412, simple_loss=0.236, pruned_loss=0.02318, over 18362.00 frames. ], tot_loss[loss=0.1639, simple_loss=0.2535, pruned_loss=0.03719, over 3590644.13 frames.
], batch size: 50, lr: 5.38e-03, grad_scale: 8.0 2023-03-09 14:05:50,249 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74240.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:05:53,588 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4522, 5.9741, 5.5577, 5.7488, 5.5480, 5.4135, 6.0542, 5.9877], device='cuda:3'), covar=tensor([0.1091, 0.0722, 0.0455, 0.0767, 0.1364, 0.0676, 0.0550, 0.0681], device='cuda:3'), in_proj_covar=tensor([0.0609, 0.0524, 0.0379, 0.0547, 0.0737, 0.0542, 0.0739, 0.0566], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 14:05:57,042 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74246.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:05:59,361 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8263, 4.7439, 4.7698, 3.6322, 3.9604, 3.6660, 2.7393, 2.6690], device='cuda:3'), covar=tensor([0.0202, 0.0110, 0.0059, 0.0237, 0.0275, 0.0192, 0.0660, 0.0766], device='cuda:3'), in_proj_covar=tensor([0.0069, 0.0058, 0.0061, 0.0067, 0.0087, 0.0065, 0.0076, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 14:06:00,378 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9708, 4.9595, 4.6411, 4.8745, 4.8808, 4.2709, 4.7955, 4.6153], device='cuda:3'), covar=tensor([0.0502, 0.0541, 0.1372, 0.0785, 0.0608, 0.0519, 0.0490, 0.1057], device='cuda:3'), in_proj_covar=tensor([0.0502, 0.0565, 0.0706, 0.0436, 0.0451, 0.0511, 0.0546, 0.0688], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 14:06:32,524 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.761e+02 2.609e+02 3.003e+02 3.447e+02 5.872e+02, threshold=6.005e+02, percent-clipped=0.0 2023-03-09 14:06:37,171 INFO [train.py:898] (3/4) Epoch 21, batch 1600, loss[loss=0.177, simple_loss=0.2698, pruned_loss=0.04209, over 16251.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.2546, pruned_loss=0.03758, over 3590836.56 frames. 
], batch size: 95, lr: 5.38e-03, grad_scale: 8.0 2023-03-09 14:06:46,922 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2480, 5.2750, 5.4833, 5.5415, 5.2098, 6.0729, 5.6669, 5.3780], device='cuda:3'), covar=tensor([0.1126, 0.0664, 0.0740, 0.0724, 0.1406, 0.0723, 0.0823, 0.1578], device='cuda:3'), in_proj_covar=tensor([0.0360, 0.0288, 0.0314, 0.0313, 0.0331, 0.0423, 0.0282, 0.0419], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 14:06:49,327 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5526, 3.0366, 4.2829, 3.8065, 2.8900, 4.6538, 4.0118, 3.0627], device='cuda:3'), covar=tensor([0.0584, 0.1364, 0.0328, 0.0422, 0.1411, 0.0204, 0.0551, 0.0915], device='cuda:3'), in_proj_covar=tensor([0.0205, 0.0233, 0.0206, 0.0158, 0.0218, 0.0208, 0.0241, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 14:06:53,630 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=74294.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:07:01,857 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74301.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:07:35,710 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3682, 2.5808, 2.5649, 2.7496, 3.4637, 3.2784, 2.9666, 2.7811], device='cuda:3'), covar=tensor([0.0192, 0.0338, 0.0500, 0.0430, 0.0216, 0.0175, 0.0441, 0.0388], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0133, 0.0162, 0.0156, 0.0128, 0.0116, 0.0153, 0.0154], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:07:36,396 INFO [train.py:898] (3/4) Epoch 21, batch 1650, loss[loss=0.1891, simple_loss=0.2736, pruned_loss=0.05232, over 17985.00 frames. ], tot_loss[loss=0.1645, simple_loss=0.2542, pruned_loss=0.03742, over 3596338.64 frames. ], batch size: 65, lr: 5.38e-03, grad_scale: 8.0 2023-03-09 14:08:30,596 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.874e+02 2.715e+02 3.292e+02 3.947e+02 6.741e+02, threshold=6.584e+02, percent-clipped=1.0 2023-03-09 14:08:35,088 INFO [train.py:898] (3/4) Epoch 21, batch 1700, loss[loss=0.1899, simple_loss=0.2784, pruned_loss=0.05066, over 18345.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2542, pruned_loss=0.03725, over 3592832.83 frames. ], batch size: 55, lr: 5.38e-03, grad_scale: 8.0 2023-03-09 14:08:44,111 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74388.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:08:59,180 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0669, 5.5456, 5.5466, 5.5536, 5.0843, 5.4931, 4.8850, 5.4486], device='cuda:3'), covar=tensor([0.0208, 0.0225, 0.0145, 0.0359, 0.0321, 0.0184, 0.0940, 0.0245], device='cuda:3'), in_proj_covar=tensor([0.0216, 0.0261, 0.0252, 0.0331, 0.0268, 0.0270, 0.0310, 0.0259], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 14:09:20,334 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74420.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:09:33,171 INFO [train.py:898] (3/4) Epoch 21, batch 1750, loss[loss=0.1621, simple_loss=0.2619, pruned_loss=0.03116, over 18617.00 frames. 
], tot_loss[loss=0.1645, simple_loss=0.2547, pruned_loss=0.03712, over 3591177.11 frames. ], batch size: 52, lr: 5.37e-03, grad_scale: 8.0 2023-03-09 14:09:38,768 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=74436.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:10:18,417 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8047, 5.2431, 2.6479, 5.0629, 4.9660, 5.2149, 4.9615, 2.4525], device='cuda:3'), covar=tensor([0.0235, 0.0054, 0.0788, 0.0070, 0.0066, 0.0059, 0.0094, 0.1091], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0080, 0.0095, 0.0095, 0.0084, 0.0075, 0.0084, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 14:10:25,798 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.991e+02 2.679e+02 3.149e+02 3.785e+02 6.522e+02, threshold=6.298e+02, percent-clipped=0.0 2023-03-09 14:10:30,672 INFO [train.py:898] (3/4) Epoch 21, batch 1800, loss[loss=0.1602, simple_loss=0.2576, pruned_loss=0.03136, over 18235.00 frames. ], tot_loss[loss=0.1646, simple_loss=0.2546, pruned_loss=0.03728, over 3582986.19 frames. ], batch size: 60, lr: 5.37e-03, grad_scale: 8.0 2023-03-09 14:11:11,724 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.59 vs. limit=2.0 2023-03-09 14:11:28,024 INFO [train.py:898] (3/4) Epoch 21, batch 1850, loss[loss=0.1678, simple_loss=0.255, pruned_loss=0.04034, over 16215.00 frames. ], tot_loss[loss=0.1637, simple_loss=0.2535, pruned_loss=0.03701, over 3583872.13 frames. ], batch size: 94, lr: 5.37e-03, grad_scale: 8.0 2023-03-09 14:12:12,530 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.71 vs. limit=2.0 2023-03-09 14:12:21,972 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.780e+02 2.538e+02 3.055e+02 3.545e+02 5.986e+02, threshold=6.111e+02, percent-clipped=0.0 2023-03-09 14:12:26,491 INFO [train.py:898] (3/4) Epoch 21, batch 1900, loss[loss=0.1471, simple_loss=0.2396, pruned_loss=0.02735, over 18500.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2536, pruned_loss=0.03704, over 3587344.97 frames. ], batch size: 51, lr: 5.37e-03, grad_scale: 8.0 2023-03-09 14:12:43,286 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74595.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:12:44,282 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74596.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:13:24,951 INFO [train.py:898] (3/4) Epoch 21, batch 1950, loss[loss=0.1534, simple_loss=0.2449, pruned_loss=0.03095, over 18427.00 frames. ], tot_loss[loss=0.1635, simple_loss=0.2535, pruned_loss=0.03677, over 3588204.60 frames. 
], batch size: 48, lr: 5.37e-03, grad_scale: 16.0 2023-03-09 14:13:40,589 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7211, 3.7002, 3.5826, 3.2424, 3.4866, 2.9923, 2.9045, 3.7749], device='cuda:3'), covar=tensor([0.0062, 0.0085, 0.0075, 0.0131, 0.0098, 0.0169, 0.0196, 0.0070], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0158, 0.0135, 0.0186, 0.0142, 0.0178, 0.0184, 0.0123], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 14:13:56,035 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74656.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:14:19,407 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.749e+02 2.532e+02 2.978e+02 3.558e+02 7.453e+02, threshold=5.957e+02, percent-clipped=1.0 2023-03-09 14:14:23,892 INFO [train.py:898] (3/4) Epoch 21, batch 2000, loss[loss=0.1732, simple_loss=0.2679, pruned_loss=0.03924, over 18380.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2528, pruned_loss=0.03632, over 3596822.44 frames. ], batch size: 56, lr: 5.37e-03, grad_scale: 16.0 2023-03-09 14:14:38,313 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5738, 3.2843, 4.3186, 3.8641, 3.2182, 2.9576, 3.9458, 4.4717], device='cuda:3'), covar=tensor([0.0819, 0.1329, 0.0250, 0.0439, 0.0858, 0.1134, 0.0435, 0.0333], device='cuda:3'), in_proj_covar=tensor([0.0148, 0.0275, 0.0151, 0.0181, 0.0191, 0.0191, 0.0195, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:15:09,471 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74720.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:15:21,725 INFO [train.py:898] (3/4) Epoch 21, batch 2050, loss[loss=0.1677, simple_loss=0.2621, pruned_loss=0.03665, over 18482.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2531, pruned_loss=0.03648, over 3588354.29 frames. ], batch size: 53, lr: 5.36e-03, grad_scale: 16.0 2023-03-09 14:16:05,343 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=74768.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:16:15,325 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.051e+02 2.659e+02 3.081e+02 3.760e+02 7.989e+02, threshold=6.163e+02, percent-clipped=4.0 2023-03-09 14:16:19,623 INFO [train.py:898] (3/4) Epoch 21, batch 2100, loss[loss=0.1457, simple_loss=0.2373, pruned_loss=0.02702, over 18246.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2532, pruned_loss=0.03674, over 3597640.93 frames. ], batch size: 47, lr: 5.36e-03, grad_scale: 16.0 2023-03-09 14:16:55,788 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4454, 6.0080, 5.5089, 5.7518, 5.5211, 5.3939, 6.0437, 5.9811], device='cuda:3'), covar=tensor([0.1187, 0.0756, 0.0492, 0.0766, 0.1367, 0.0727, 0.0586, 0.0745], device='cuda:3'), in_proj_covar=tensor([0.0615, 0.0532, 0.0385, 0.0554, 0.0747, 0.0552, 0.0750, 0.0572], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:3') 2023-03-09 14:17:17,969 INFO [train.py:898] (3/4) Epoch 21, batch 2150, loss[loss=0.1642, simple_loss=0.2507, pruned_loss=0.03878, over 18274.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2537, pruned_loss=0.03693, over 3596621.66 frames. 
], batch size: 47, lr: 5.36e-03, grad_scale: 16.0 2023-03-09 14:17:43,718 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-09 14:18:11,058 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.902e+02 2.637e+02 3.175e+02 3.840e+02 7.861e+02, threshold=6.351e+02, percent-clipped=2.0 2023-03-09 14:18:15,485 INFO [train.py:898] (3/4) Epoch 21, batch 2200, loss[loss=0.1626, simple_loss=0.2556, pruned_loss=0.03478, over 18612.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.2539, pruned_loss=0.03717, over 3596550.16 frames. ], batch size: 52, lr: 5.36e-03, grad_scale: 16.0 2023-03-09 14:18:32,698 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74896.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:18:47,854 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74909.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:19:13,477 INFO [train.py:898] (3/4) Epoch 21, batch 2250, loss[loss=0.1807, simple_loss=0.2665, pruned_loss=0.04743, over 18494.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2536, pruned_loss=0.03748, over 3586570.75 frames. ], batch size: 53, lr: 5.36e-03, grad_scale: 16.0 2023-03-09 14:19:28,489 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=74944.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:19:36,427 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74951.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:19:36,679 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8598, 4.1510, 2.3798, 4.2179, 5.1867, 2.6063, 3.8169, 3.9619], device='cuda:3'), covar=tensor([0.0199, 0.1090, 0.1672, 0.0518, 0.0099, 0.1181, 0.0643, 0.0681], device='cuda:3'), in_proj_covar=tensor([0.0167, 0.0270, 0.0204, 0.0194, 0.0126, 0.0181, 0.0215, 0.0224], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:19:51,904 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74964.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:19:52,395 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.63 vs. limit=5.0 2023-03-09 14:19:59,290 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74970.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:20:02,967 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8557, 5.3580, 5.3131, 5.3248, 4.8886, 5.2644, 4.6315, 5.2412], device='cuda:3'), covar=tensor([0.0235, 0.0263, 0.0203, 0.0399, 0.0360, 0.0231, 0.1133, 0.0307], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0262, 0.0255, 0.0331, 0.0271, 0.0270, 0.0311, 0.0261], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 14:20:08,788 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.078e+02 2.763e+02 3.199e+02 3.835e+02 8.109e+02, threshold=6.398e+02, percent-clipped=3.0 2023-03-09 14:20:12,093 INFO [train.py:898] (3/4) Epoch 21, batch 2300, loss[loss=0.1425, simple_loss=0.2351, pruned_loss=0.02499, over 18259.00 frames. ], tot_loss[loss=0.1636, simple_loss=0.2531, pruned_loss=0.03711, over 3596529.25 frames. ], batch size: 45, lr: 5.35e-03, grad_scale: 8.0 2023-03-09 14:20:22,880 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. 
limit=2.0 2023-03-09 14:20:45,251 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.71 vs. limit=2.0 2023-03-09 14:20:45,255 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-09 14:20:48,260 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9344, 5.3651, 2.6873, 5.2118, 5.1217, 5.3793, 5.1061, 2.6322], device='cuda:3'), covar=tensor([0.0210, 0.0060, 0.0787, 0.0073, 0.0059, 0.0064, 0.0089, 0.1026], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0081, 0.0095, 0.0095, 0.0084, 0.0075, 0.0084, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 14:21:03,828 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=75025.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:21:10,835 INFO [train.py:898] (3/4) Epoch 21, batch 2350, loss[loss=0.1679, simple_loss=0.2549, pruned_loss=0.04041, over 18350.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2534, pruned_loss=0.03713, over 3592567.60 frames. ], batch size: 56, lr: 5.35e-03, grad_scale: 8.0 2023-03-09 14:21:37,117 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-09 14:22:04,859 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.756e+02 2.674e+02 3.137e+02 3.722e+02 6.361e+02, threshold=6.274e+02, percent-clipped=0.0 2023-03-09 14:22:05,343 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9923, 4.2106, 2.6850, 4.0906, 5.3305, 2.7839, 3.8338, 4.1205], device='cuda:3'), covar=tensor([0.0175, 0.1149, 0.1463, 0.0621, 0.0070, 0.1107, 0.0649, 0.0651], device='cuda:3'), in_proj_covar=tensor([0.0167, 0.0271, 0.0205, 0.0196, 0.0127, 0.0183, 0.0216, 0.0224], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:22:08,962 INFO [train.py:898] (3/4) Epoch 21, batch 2400, loss[loss=0.1726, simple_loss=0.2702, pruned_loss=0.03749, over 17931.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2526, pruned_loss=0.03704, over 3580959.15 frames. ], batch size: 65, lr: 5.35e-03, grad_scale: 8.0 2023-03-09 14:22:34,019 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4781, 3.4468, 3.3301, 3.0455, 3.2820, 2.7450, 2.7095, 3.5319], device='cuda:3'), covar=tensor([0.0071, 0.0090, 0.0078, 0.0153, 0.0096, 0.0182, 0.0201, 0.0067], device='cuda:3'), in_proj_covar=tensor([0.0140, 0.0159, 0.0136, 0.0185, 0.0142, 0.0178, 0.0183, 0.0122], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 14:23:06,748 INFO [train.py:898] (3/4) Epoch 21, batch 2450, loss[loss=0.1491, simple_loss=0.2376, pruned_loss=0.03031, over 18253.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2521, pruned_loss=0.03668, over 3588591.46 frames. 
], batch size: 47, lr: 5.35e-03, grad_scale: 8.0 2023-03-09 14:23:50,293 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5569, 5.5212, 5.1691, 5.4855, 5.4917, 4.8793, 5.3802, 5.1303], device='cuda:3'), covar=tensor([0.0369, 0.0380, 0.1167, 0.0678, 0.0482, 0.0390, 0.0397, 0.0966], device='cuda:3'), in_proj_covar=tensor([0.0502, 0.0557, 0.0703, 0.0436, 0.0448, 0.0507, 0.0542, 0.0684], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 14:24:00,721 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.788e+02 2.611e+02 3.062e+02 3.854e+02 9.257e+02, threshold=6.124e+02, percent-clipped=4.0 2023-03-09 14:24:03,993 INFO [train.py:898] (3/4) Epoch 21, batch 2500, loss[loss=0.1877, simple_loss=0.2684, pruned_loss=0.05344, over 12324.00 frames. ], tot_loss[loss=0.1636, simple_loss=0.2528, pruned_loss=0.0372, over 3580100.52 frames. ], batch size: 130, lr: 5.35e-03, grad_scale: 8.0 2023-03-09 14:24:22,953 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7575, 3.1925, 3.9842, 2.7127, 3.6899, 2.5671, 2.6730, 2.2532], device='cuda:3'), covar=tensor([0.1057, 0.0977, 0.0296, 0.0793, 0.0663, 0.2327, 0.2233, 0.1749], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0244, 0.0189, 0.0195, 0.0256, 0.0269, 0.0319, 0.0232], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 14:24:59,648 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=75228.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:25:02,714 INFO [train.py:898] (3/4) Epoch 21, batch 2550, loss[loss=0.1737, simple_loss=0.2661, pruned_loss=0.04063, over 18396.00 frames. ], tot_loss[loss=0.1635, simple_loss=0.2527, pruned_loss=0.03716, over 3574114.44 frames. ], batch size: 52, lr: 5.35e-03, grad_scale: 8.0 2023-03-09 14:25:26,301 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75251.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:25:42,179 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75265.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:25:48,461 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=75270.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:25:57,524 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.566e+02 2.536e+02 3.166e+02 3.977e+02 7.777e+02, threshold=6.331e+02, percent-clipped=2.0 2023-03-09 14:26:01,091 INFO [train.py:898] (3/4) Epoch 21, batch 2600, loss[loss=0.157, simple_loss=0.2453, pruned_loss=0.03433, over 18355.00 frames. ], tot_loss[loss=0.1636, simple_loss=0.2532, pruned_loss=0.03699, over 3578291.19 frames. ], batch size: 50, lr: 5.34e-03, grad_scale: 8.0 2023-03-09 14:26:11,232 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=75289.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:26:22,512 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=75299.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:26:46,546 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75320.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:26:59,266 INFO [train.py:898] (3/4) Epoch 21, batch 2650, loss[loss=0.1673, simple_loss=0.2664, pruned_loss=0.03413, over 18472.00 frames. ], tot_loss[loss=0.1631, simple_loss=0.2528, pruned_loss=0.03672, over 3578501.67 frames. 
], batch size: 53, lr: 5.34e-03, grad_scale: 8.0 2023-03-09 14:26:59,715 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=75331.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 14:27:53,152 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.804e+02 2.756e+02 3.274e+02 3.890e+02 7.803e+02, threshold=6.547e+02, percent-clipped=3.0 2023-03-09 14:27:57,176 INFO [train.py:898] (3/4) Epoch 21, batch 2700, loss[loss=0.1403, simple_loss=0.2264, pruned_loss=0.02714, over 18245.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2529, pruned_loss=0.03678, over 3579909.96 frames. ], batch size: 45, lr: 5.34e-03, grad_scale: 8.0 2023-03-09 14:28:55,189 INFO [train.py:898] (3/4) Epoch 21, batch 2750, loss[loss=0.1489, simple_loss=0.2346, pruned_loss=0.03156, over 17647.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2527, pruned_loss=0.03662, over 3586885.56 frames. ], batch size: 39, lr: 5.34e-03, grad_scale: 4.0 2023-03-09 14:29:31,309 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1711, 5.1402, 4.7931, 5.0727, 5.0833, 4.4659, 4.9668, 4.7783], device='cuda:3'), covar=tensor([0.0454, 0.0476, 0.1302, 0.0827, 0.0580, 0.0486, 0.0485, 0.1051], device='cuda:3'), in_proj_covar=tensor([0.0504, 0.0561, 0.0711, 0.0443, 0.0454, 0.0512, 0.0548, 0.0687], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 14:29:50,241 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.664e+02 2.526e+02 3.249e+02 3.895e+02 6.224e+02, threshold=6.498e+02, percent-clipped=0.0 2023-03-09 14:29:52,461 INFO [train.py:898] (3/4) Epoch 21, batch 2800, loss[loss=0.1547, simple_loss=0.2369, pruned_loss=0.0362, over 18227.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2528, pruned_loss=0.03663, over 3598137.47 frames. ], batch size: 45, lr: 5.34e-03, grad_scale: 8.0 2023-03-09 14:30:04,101 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-09 14:30:50,508 INFO [train.py:898] (3/4) Epoch 21, batch 2850, loss[loss=0.1912, simple_loss=0.2794, pruned_loss=0.05153, over 18297.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2532, pruned_loss=0.03677, over 3581029.18 frames. ], batch size: 57, lr: 5.34e-03, grad_scale: 8.0 2023-03-09 14:31:06,279 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7912, 4.7564, 2.5808, 4.5673, 4.4657, 4.7271, 4.5273, 2.7457], device='cuda:3'), covar=tensor([0.0218, 0.0057, 0.0787, 0.0108, 0.0081, 0.0068, 0.0094, 0.0894], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0080, 0.0095, 0.0095, 0.0084, 0.0075, 0.0084, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 14:31:30,960 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75565.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:31:46,371 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.918e+02 2.851e+02 3.414e+02 4.346e+02 8.993e+02, threshold=6.828e+02, percent-clipped=4.0 2023-03-09 14:31:48,614 INFO [train.py:898] (3/4) Epoch 21, batch 2900, loss[loss=0.1805, simple_loss=0.2714, pruned_loss=0.04477, over 18059.00 frames. ], tot_loss[loss=0.1639, simple_loss=0.2536, pruned_loss=0.03704, over 3580370.05 frames. 
], batch size: 62, lr: 5.33e-03, grad_scale: 8.0 2023-03-09 14:31:52,604 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75584.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:32:21,166 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5637, 5.4924, 5.1657, 5.5096, 5.4618, 4.8587, 5.3826, 5.1601], device='cuda:3'), covar=tensor([0.0434, 0.0443, 0.1333, 0.0708, 0.0580, 0.0447, 0.0447, 0.0970], device='cuda:3'), in_proj_covar=tensor([0.0503, 0.0561, 0.0710, 0.0440, 0.0453, 0.0510, 0.0544, 0.0685], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 14:32:27,074 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=75613.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:32:35,162 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75620.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:32:41,989 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75626.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 14:32:45,585 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1511, 5.6490, 2.8722, 5.4471, 5.3248, 5.6668, 5.4212, 2.8896], device='cuda:3'), covar=tensor([0.0190, 0.0056, 0.0758, 0.0060, 0.0068, 0.0052, 0.0073, 0.0952], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0081, 0.0096, 0.0096, 0.0085, 0.0076, 0.0085, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 14:32:47,493 INFO [train.py:898] (3/4) Epoch 21, batch 2950, loss[loss=0.1901, simple_loss=0.2742, pruned_loss=0.05296, over 18344.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2537, pruned_loss=0.03741, over 3575435.15 frames. ], batch size: 56, lr: 5.33e-03, grad_scale: 8.0 2023-03-09 14:33:28,280 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7825, 3.5211, 2.3680, 4.5379, 3.2101, 4.3730, 2.6846, 4.1001], device='cuda:3'), covar=tensor([0.0598, 0.0881, 0.1472, 0.0490, 0.0869, 0.0318, 0.1200, 0.0421], device='cuda:3'), in_proj_covar=tensor([0.0213, 0.0225, 0.0189, 0.0284, 0.0190, 0.0262, 0.0202, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:33:31,424 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=75668.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:33:38,421 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6076, 3.3982, 2.2334, 4.4056, 3.0620, 4.2381, 2.3868, 3.8572], device='cuda:3'), covar=tensor([0.0611, 0.0861, 0.1502, 0.0492, 0.0878, 0.0262, 0.1295, 0.0446], device='cuda:3'), in_proj_covar=tensor([0.0213, 0.0226, 0.0189, 0.0284, 0.0190, 0.0262, 0.0202, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:33:43,531 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.752e+02 2.513e+02 3.131e+02 3.690e+02 6.570e+02, threshold=6.263e+02, percent-clipped=0.0 2023-03-09 14:33:45,505 INFO [train.py:898] (3/4) Epoch 21, batch 3000, loss[loss=0.1622, simple_loss=0.2482, pruned_loss=0.0381, over 18283.00 frames. ], tot_loss[loss=0.1635, simple_loss=0.2532, pruned_loss=0.03684, over 3583724.55 frames. 
], batch size: 49, lr: 5.33e-03, grad_scale: 8.0 2023-03-09 14:33:45,505 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 14:33:57,964 INFO [train.py:932] (3/4) Epoch 21, validation: loss=0.1498, simple_loss=0.2495, pruned_loss=0.02501, over 944034.00 frames. 2023-03-09 14:33:57,965 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 14:34:55,549 INFO [train.py:898] (3/4) Epoch 21, batch 3050, loss[loss=0.1707, simple_loss=0.2627, pruned_loss=0.03936, over 18345.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2544, pruned_loss=0.03758, over 3568061.36 frames. ], batch size: 55, lr: 5.33e-03, grad_scale: 8.0 2023-03-09 14:35:05,670 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7449, 3.6614, 5.1369, 4.4294, 3.4559, 3.0123, 4.5574, 5.3431], device='cuda:3'), covar=tensor([0.0765, 0.1638, 0.0174, 0.0383, 0.0844, 0.1174, 0.0352, 0.0327], device='cuda:3'), in_proj_covar=tensor([0.0147, 0.0273, 0.0152, 0.0181, 0.0190, 0.0190, 0.0194, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0002, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:35:51,757 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.562e+02 2.753e+02 3.195e+02 3.749e+02 6.792e+02, threshold=6.390e+02, percent-clipped=2.0 2023-03-09 14:35:54,447 INFO [train.py:898] (3/4) Epoch 21, batch 3100, loss[loss=0.1895, simple_loss=0.2824, pruned_loss=0.04836, over 18019.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2544, pruned_loss=0.03764, over 3567484.88 frames. ], batch size: 65, lr: 5.33e-03, grad_scale: 8.0 2023-03-09 14:35:58,983 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0196, 4.9875, 4.7155, 4.9106, 4.9451, 4.4358, 4.8737, 4.6533], device='cuda:3'), covar=tensor([0.0416, 0.0470, 0.1183, 0.0732, 0.0525, 0.0419, 0.0404, 0.1041], device='cuda:3'), in_proj_covar=tensor([0.0501, 0.0563, 0.0707, 0.0440, 0.0452, 0.0510, 0.0544, 0.0683], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 14:36:52,853 INFO [train.py:898] (3/4) Epoch 21, batch 3150, loss[loss=0.1593, simple_loss=0.2472, pruned_loss=0.03572, over 18295.00 frames. ], tot_loss[loss=0.1639, simple_loss=0.2535, pruned_loss=0.03711, over 3585488.06 frames. ], batch size: 49, lr: 5.32e-03, grad_scale: 8.0 2023-03-09 14:37:32,261 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-09 14:37:49,364 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.108e+02 2.640e+02 3.076e+02 3.933e+02 6.282e+02, threshold=6.152e+02, percent-clipped=0.0 2023-03-09 14:37:51,673 INFO [train.py:898] (3/4) Epoch 21, batch 3200, loss[loss=0.1845, simple_loss=0.271, pruned_loss=0.04898, over 17855.00 frames. ], tot_loss[loss=0.1645, simple_loss=0.2542, pruned_loss=0.03739, over 3592283.53 frames. 
], batch size: 70, lr: 5.32e-03, grad_scale: 8.0 2023-03-09 14:37:55,443 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75884.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:37:59,634 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0873, 5.3753, 2.5141, 5.2373, 5.1050, 5.3613, 5.1905, 2.6985], device='cuda:3'), covar=tensor([0.0192, 0.0057, 0.0874, 0.0069, 0.0071, 0.0067, 0.0081, 0.1038], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0081, 0.0096, 0.0095, 0.0085, 0.0076, 0.0085, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 14:38:43,220 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6707, 2.3531, 2.6609, 2.6348, 3.1241, 4.7710, 4.7027, 3.4180], device='cuda:3'), covar=tensor([0.1699, 0.2353, 0.2773, 0.1841, 0.2412, 0.0237, 0.0381, 0.0918], device='cuda:3'), in_proj_covar=tensor([0.0301, 0.0346, 0.0383, 0.0277, 0.0389, 0.0242, 0.0296, 0.0256], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 14:38:45,289 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75926.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 14:38:50,629 INFO [train.py:898] (3/4) Epoch 21, batch 3250, loss[loss=0.1533, simple_loss=0.243, pruned_loss=0.03179, over 18396.00 frames. ], tot_loss[loss=0.1637, simple_loss=0.2532, pruned_loss=0.03708, over 3606053.07 frames. ], batch size: 48, lr: 5.32e-03, grad_scale: 8.0 2023-03-09 14:38:52,003 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=75932.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:39:12,187 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9249, 5.2814, 2.3845, 5.1237, 4.9984, 5.2488, 5.0536, 2.6497], device='cuda:3'), covar=tensor([0.0226, 0.0057, 0.0917, 0.0078, 0.0074, 0.0071, 0.0096, 0.1067], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0081, 0.0096, 0.0096, 0.0086, 0.0076, 0.0086, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 14:39:41,556 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=75974.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:39:46,942 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.093e+02 2.622e+02 3.085e+02 3.781e+02 1.173e+03, threshold=6.171e+02, percent-clipped=7.0 2023-03-09 14:39:49,198 INFO [train.py:898] (3/4) Epoch 21, batch 3300, loss[loss=0.1846, simple_loss=0.2748, pruned_loss=0.04717, over 18291.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2546, pruned_loss=0.0375, over 3592504.88 frames. ], batch size: 57, lr: 5.32e-03, grad_scale: 8.0 2023-03-09 14:40:52,549 INFO [train.py:898] (3/4) Epoch 21, batch 3350, loss[loss=0.1489, simple_loss=0.2251, pruned_loss=0.0363, over 18463.00 frames. ], tot_loss[loss=0.1637, simple_loss=0.2534, pruned_loss=0.037, over 3600590.25 frames. ], batch size: 43, lr: 5.32e-03, grad_scale: 8.0 2023-03-09 14:41:26,063 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.63 vs. 
limit=5.0 2023-03-09 14:41:48,801 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.784e+02 2.702e+02 3.276e+02 4.045e+02 1.355e+03, threshold=6.552e+02, percent-clipped=6.0 2023-03-09 14:41:51,073 INFO [train.py:898] (3/4) Epoch 21, batch 3400, loss[loss=0.1795, simple_loss=0.2709, pruned_loss=0.04406, over 18352.00 frames. ], tot_loss[loss=0.1644, simple_loss=0.2541, pruned_loss=0.03729, over 3607745.58 frames. ], batch size: 56, lr: 5.32e-03, grad_scale: 8.0 2023-03-09 14:42:02,174 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.80 vs. limit=5.0 2023-03-09 14:42:06,273 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5177, 2.2660, 2.4785, 2.5778, 3.1138, 4.7788, 4.5807, 3.1027], device='cuda:3'), covar=tensor([0.1881, 0.2449, 0.3081, 0.1931, 0.2469, 0.0204, 0.0388, 0.1034], device='cuda:3'), in_proj_covar=tensor([0.0303, 0.0349, 0.0386, 0.0280, 0.0392, 0.0244, 0.0297, 0.0257], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3') 2023-03-09 14:42:49,332 INFO [train.py:898] (3/4) Epoch 21, batch 3450, loss[loss=0.1732, simple_loss=0.2673, pruned_loss=0.03958, over 17069.00 frames. ], tot_loss[loss=0.1644, simple_loss=0.254, pruned_loss=0.03739, over 3597280.27 frames. ], batch size: 78, lr: 5.31e-03, grad_scale: 8.0 2023-03-09 14:43:45,619 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.955e+02 2.600e+02 3.134e+02 3.743e+02 8.321e+02, threshold=6.269e+02, percent-clipped=3.0 2023-03-09 14:43:47,844 INFO [train.py:898] (3/4) Epoch 21, batch 3500, loss[loss=0.1475, simple_loss=0.2288, pruned_loss=0.03306, over 17612.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2546, pruned_loss=0.03756, over 3584628.38 frames. ], batch size: 39, lr: 5.31e-03, grad_scale: 8.0 2023-03-09 14:44:39,774 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8433, 3.8681, 3.7435, 3.3583, 3.5928, 3.1063, 3.1060, 3.9352], device='cuda:3'), covar=tensor([0.0060, 0.0075, 0.0076, 0.0123, 0.0092, 0.0166, 0.0169, 0.0052], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0158, 0.0135, 0.0187, 0.0142, 0.0178, 0.0184, 0.0122], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 14:44:43,827 INFO [train.py:898] (3/4) Epoch 21, batch 3550, loss[loss=0.1578, simple_loss=0.2446, pruned_loss=0.03546, over 18523.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.2537, pruned_loss=0.03721, over 3587500.29 frames. ], batch size: 49, lr: 5.31e-03, grad_scale: 4.0 2023-03-09 14:45:24,490 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76269.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 14:45:36,127 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.098e+02 2.775e+02 3.080e+02 3.770e+02 8.534e+02, threshold=6.159e+02, percent-clipped=2.0 2023-03-09 14:45:37,245 INFO [train.py:898] (3/4) Epoch 21, batch 3600, loss[loss=0.1641, simple_loss=0.2522, pruned_loss=0.03805, over 18296.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2527, pruned_loss=0.03693, over 3586345.73 frames. ], batch size: 57, lr: 5.31e-03, grad_scale: 8.0 2023-03-09 14:45:51,624 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76294.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:46:42,827 INFO [train.py:898] (3/4) Epoch 22, batch 0, loss[loss=0.1731, simple_loss=0.2721, pruned_loss=0.03702, over 18483.00 frames. 
], tot_loss[loss=0.1731, simple_loss=0.2721, pruned_loss=0.03702, over 18483.00 frames. ], batch size: 53, lr: 5.18e-03, grad_scale: 8.0 2023-03-09 14:46:42,827 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 14:46:54,608 INFO [train.py:932] (3/4) Epoch 22, validation: loss=0.1504, simple_loss=0.25, pruned_loss=0.02541, over 944034.00 frames. 2023-03-09 14:46:54,609 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 14:47:04,800 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76323.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 14:47:13,422 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76330.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 14:47:42,838 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76355.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:47:53,549 INFO [train.py:898] (3/4) Epoch 22, batch 50, loss[loss=0.1488, simple_loss=0.2364, pruned_loss=0.03058, over 18255.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2513, pruned_loss=0.03568, over 811759.04 frames. ], batch size: 45, lr: 5.18e-03, grad_scale: 8.0 2023-03-09 14:48:11,541 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.962e+02 2.572e+02 3.075e+02 3.753e+02 8.462e+02, threshold=6.150e+02, percent-clipped=5.0 2023-03-09 14:48:16,383 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76384.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 14:48:52,275 INFO [train.py:898] (3/4) Epoch 22, batch 100, loss[loss=0.1681, simple_loss=0.2611, pruned_loss=0.03754, over 18500.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2531, pruned_loss=0.03611, over 1426851.61 frames. ], batch size: 51, lr: 5.18e-03, grad_scale: 8.0 2023-03-09 14:49:17,304 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5443, 2.2528, 2.3878, 2.6358, 2.9848, 4.9065, 4.7285, 3.2605], device='cuda:3'), covar=tensor([0.2210, 0.3147, 0.3831, 0.2139, 0.3371, 0.0258, 0.0414, 0.1104], device='cuda:3'), in_proj_covar=tensor([0.0301, 0.0347, 0.0384, 0.0280, 0.0390, 0.0244, 0.0297, 0.0255], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 14:49:49,352 INFO [train.py:898] (3/4) Epoch 22, batch 150, loss[loss=0.1354, simple_loss=0.2157, pruned_loss=0.02752, over 18457.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2528, pruned_loss=0.03699, over 1904376.13 frames. ], batch size: 43, lr: 5.18e-03, grad_scale: 8.0 2023-03-09 14:50:05,949 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.617e+02 2.647e+02 3.251e+02 3.870e+02 7.271e+02, threshold=6.503e+02, percent-clipped=3.0 2023-03-09 14:50:46,566 INFO [train.py:898] (3/4) Epoch 22, batch 200, loss[loss=0.1649, simple_loss=0.2579, pruned_loss=0.0359, over 18225.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2529, pruned_loss=0.03685, over 2261568.88 frames. ], batch size: 60, lr: 5.18e-03, grad_scale: 8.0 2023-03-09 14:51:45,328 INFO [train.py:898] (3/4) Epoch 22, batch 250, loss[loss=0.1465, simple_loss=0.231, pruned_loss=0.03099, over 18173.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2529, pruned_loss=0.03658, over 2564514.00 frames. 
], batch size: 44, lr: 5.18e-03, grad_scale: 8.0 2023-03-09 14:52:02,089 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.820e+02 2.535e+02 2.870e+02 3.429e+02 6.205e+02, threshold=5.739e+02, percent-clipped=0.0 2023-03-09 14:52:44,292 INFO [train.py:898] (3/4) Epoch 22, batch 300, loss[loss=0.1676, simple_loss=0.2578, pruned_loss=0.03866, over 18350.00 frames. ], tot_loss[loss=0.1635, simple_loss=0.2531, pruned_loss=0.03696, over 2786628.29 frames. ], batch size: 56, lr: 5.17e-03, grad_scale: 8.0 2023-03-09 14:52:55,713 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=76625.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 14:53:12,207 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4507, 2.8535, 4.3453, 3.5744, 2.6375, 4.5495, 3.8908, 2.8203], device='cuda:3'), covar=tensor([0.0603, 0.1412, 0.0243, 0.0480, 0.1523, 0.0193, 0.0516, 0.1041], device='cuda:3'), in_proj_covar=tensor([0.0212, 0.0239, 0.0210, 0.0164, 0.0223, 0.0212, 0.0246, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 14:53:21,217 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6655, 3.5214, 2.1865, 4.4107, 3.1752, 3.9717, 2.0989, 3.7991], device='cuda:3'), covar=tensor([0.0527, 0.0741, 0.1493, 0.0443, 0.0785, 0.0305, 0.1533, 0.0468], device='cuda:3'), in_proj_covar=tensor([0.0216, 0.0229, 0.0192, 0.0287, 0.0194, 0.0266, 0.0205, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:53:25,109 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=76650.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:53:42,870 INFO [train.py:898] (3/4) Epoch 22, batch 350, loss[loss=0.1462, simple_loss=0.2279, pruned_loss=0.03226, over 18403.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2513, pruned_loss=0.03637, over 2972567.53 frames. ], batch size: 42, lr: 5.17e-03, grad_scale: 8.0 2023-03-09 14:53:58,883 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=76679.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 14:53:59,711 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.678e+02 2.562e+02 3.001e+02 3.817e+02 1.277e+03, threshold=6.002e+02, percent-clipped=2.0 2023-03-09 14:54:34,917 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5039, 3.2625, 2.0530, 4.2677, 2.9523, 3.7390, 2.0248, 3.5997], device='cuda:3'), covar=tensor([0.0597, 0.0855, 0.1616, 0.0521, 0.0885, 0.0420, 0.1549, 0.0524], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0231, 0.0194, 0.0289, 0.0195, 0.0269, 0.0206, 0.0202], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:54:41,500 INFO [train.py:898] (3/4) Epoch 22, batch 400, loss[loss=0.1876, simple_loss=0.2821, pruned_loss=0.04656, over 15847.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2507, pruned_loss=0.03613, over 3116347.05 frames. ], batch size: 94, lr: 5.17e-03, grad_scale: 8.0 2023-03-09 14:55:21,452 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. 
limit=2.0 2023-03-09 14:55:37,996 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8832, 5.4076, 5.3447, 5.3937, 4.8726, 5.2990, 4.6968, 5.2722], device='cuda:3'), covar=tensor([0.0210, 0.0240, 0.0193, 0.0459, 0.0408, 0.0211, 0.1114, 0.0295], device='cuda:3'), in_proj_covar=tensor([0.0217, 0.0261, 0.0256, 0.0331, 0.0271, 0.0269, 0.0312, 0.0262], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 14:55:40,010 INFO [train.py:898] (3/4) Epoch 22, batch 450, loss[loss=0.1747, simple_loss=0.2619, pruned_loss=0.04374, over 18277.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2521, pruned_loss=0.03651, over 3233944.95 frames. ], batch size: 57, lr: 5.17e-03, grad_scale: 8.0 2023-03-09 14:55:57,345 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.685e+02 2.597e+02 2.928e+02 3.376e+02 5.957e+02, threshold=5.857e+02, percent-clipped=0.0 2023-03-09 14:56:31,500 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-09 14:56:38,195 INFO [train.py:898] (3/4) Epoch 22, batch 500, loss[loss=0.1392, simple_loss=0.2205, pruned_loss=0.02895, over 17766.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2534, pruned_loss=0.03672, over 3316476.86 frames. ], batch size: 39, lr: 5.17e-03, grad_scale: 8.0 2023-03-09 14:56:58,439 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4849, 2.9282, 4.2426, 3.6190, 2.5637, 4.5233, 3.7981, 2.7085], device='cuda:3'), covar=tensor([0.0522, 0.1319, 0.0291, 0.0446, 0.1588, 0.0197, 0.0640, 0.1018], device='cuda:3'), in_proj_covar=tensor([0.0211, 0.0239, 0.0212, 0.0164, 0.0223, 0.0212, 0.0248, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 14:57:06,253 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76839.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:57:36,636 INFO [train.py:898] (3/4) Epoch 22, batch 550, loss[loss=0.1494, simple_loss=0.2361, pruned_loss=0.03135, over 18270.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2524, pruned_loss=0.03629, over 3379226.29 frames. ], batch size: 45, lr: 5.17e-03, grad_scale: 8.0 2023-03-09 14:57:53,943 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.970e+02 2.654e+02 3.120e+02 3.965e+02 8.304e+02, threshold=6.239e+02, percent-clipped=1.0 2023-03-09 14:58:17,077 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76900.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:58:28,036 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7884, 4.0241, 2.3096, 3.8787, 5.1349, 2.5899, 3.6600, 3.8777], device='cuda:3'), covar=tensor([0.0164, 0.1233, 0.1680, 0.0666, 0.0073, 0.1221, 0.0693, 0.0702], device='cuda:3'), in_proj_covar=tensor([0.0168, 0.0272, 0.0205, 0.0196, 0.0126, 0.0184, 0.0216, 0.0224], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 14:58:34,772 INFO [train.py:898] (3/4) Epoch 22, batch 600, loss[loss=0.1514, simple_loss=0.2363, pruned_loss=0.03319, over 18482.00 frames. ], tot_loss[loss=0.1619, simple_loss=0.252, pruned_loss=0.03592, over 3422317.75 frames. 
], batch size: 47, lr: 5.16e-03, grad_scale: 8.0 2023-03-09 14:58:47,321 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=76925.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 14:59:14,934 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=76950.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 14:59:32,246 INFO [train.py:898] (3/4) Epoch 22, batch 650, loss[loss=0.1557, simple_loss=0.2362, pruned_loss=0.03761, over 18344.00 frames. ], tot_loss[loss=0.1624, simple_loss=0.2522, pruned_loss=0.03626, over 3459787.27 frames. ], batch size: 46, lr: 5.16e-03, grad_scale: 8.0 2023-03-09 14:59:42,090 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=76973.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 14:59:49,393 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=76979.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 14:59:50,127 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.663e+02 2.460e+02 2.912e+02 3.515e+02 5.898e+02, threshold=5.824e+02, percent-clipped=0.0 2023-03-09 15:00:10,840 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=76998.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:00:13,564 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6194, 5.5647, 5.2233, 5.5054, 5.5326, 4.9019, 5.4200, 5.1916], device='cuda:3'), covar=tensor([0.0405, 0.0395, 0.1292, 0.0801, 0.0497, 0.0441, 0.0450, 0.0984], device='cuda:3'), in_proj_covar=tensor([0.0498, 0.0558, 0.0698, 0.0436, 0.0448, 0.0507, 0.0541, 0.0672], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 15:00:30,420 INFO [train.py:898] (3/4) Epoch 22, batch 700, loss[loss=0.1781, simple_loss=0.2682, pruned_loss=0.04398, over 17165.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2526, pruned_loss=0.03648, over 3489851.92 frames. ], batch size: 78, lr: 5.16e-03, grad_scale: 8.0 2023-03-09 15:00:33,486 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5015, 3.9639, 3.8814, 3.1108, 3.4113, 3.2033, 2.4892, 2.3209], device='cuda:3'), covar=tensor([0.0237, 0.0142, 0.0111, 0.0308, 0.0311, 0.0209, 0.0643, 0.0749], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0059, 0.0062, 0.0068, 0.0088, 0.0066, 0.0076, 0.0083], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 15:00:45,316 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=77027.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 15:01:22,738 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8104, 5.3714, 5.3243, 5.3728, 4.8553, 5.2783, 4.6632, 5.2490], device='cuda:3'), covar=tensor([0.0252, 0.0253, 0.0199, 0.0405, 0.0437, 0.0220, 0.1122, 0.0292], device='cuda:3'), in_proj_covar=tensor([0.0217, 0.0261, 0.0256, 0.0332, 0.0272, 0.0269, 0.0312, 0.0263], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 15:01:29,160 INFO [train.py:898] (3/4) Epoch 22, batch 750, loss[loss=0.144, simple_loss=0.2311, pruned_loss=0.02845, over 18484.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.252, pruned_loss=0.03622, over 3524470.33 frames. 
], batch size: 47, lr: 5.16e-03, grad_scale: 8.0 2023-03-09 15:01:47,414 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.651e+02 2.728e+02 3.361e+02 4.039e+02 1.393e+03, threshold=6.722e+02, percent-clipped=6.0 2023-03-09 15:01:50,078 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9595, 4.6543, 4.7315, 3.6190, 3.8346, 3.6762, 2.8817, 2.7545], device='cuda:3'), covar=tensor([0.0212, 0.0150, 0.0061, 0.0279, 0.0345, 0.0200, 0.0626, 0.0764], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0060, 0.0062, 0.0068, 0.0089, 0.0066, 0.0077, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 15:02:05,306 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77095.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:02:27,399 INFO [train.py:898] (3/4) Epoch 22, batch 800, loss[loss=0.1665, simple_loss=0.2589, pruned_loss=0.03709, over 18474.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2523, pruned_loss=0.03615, over 3547136.94 frames. ], batch size: 53, lr: 5.16e-03, grad_scale: 8.0 2023-03-09 15:03:16,227 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77156.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 15:03:25,831 INFO [train.py:898] (3/4) Epoch 22, batch 850, loss[loss=0.1525, simple_loss=0.2433, pruned_loss=0.03086, over 18539.00 frames. ], tot_loss[loss=0.1617, simple_loss=0.2516, pruned_loss=0.03588, over 3564431.03 frames. ], batch size: 49, lr: 5.16e-03, grad_scale: 8.0 2023-03-09 15:03:43,947 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.868e+02 2.628e+02 3.238e+02 3.999e+02 6.972e+02, threshold=6.476e+02, percent-clipped=1.0 2023-03-09 15:04:02,085 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77195.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:04:06,939 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2987, 3.2790, 3.1581, 2.9863, 3.1882, 2.6449, 2.6240, 3.3232], device='cuda:3'), covar=tensor([0.0073, 0.0100, 0.0087, 0.0143, 0.0090, 0.0183, 0.0202, 0.0075], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0161, 0.0136, 0.0188, 0.0143, 0.0181, 0.0183, 0.0123], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 15:04:20,685 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77211.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:04:24,959 INFO [train.py:898] (3/4) Epoch 22, batch 900, loss[loss=0.1625, simple_loss=0.2541, pruned_loss=0.0354, over 15680.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.251, pruned_loss=0.03591, over 3576859.47 frames. ], batch size: 94, lr: 5.15e-03, grad_scale: 8.0 2023-03-09 15:05:23,901 INFO [train.py:898] (3/4) Epoch 22, batch 950, loss[loss=0.1571, simple_loss=0.2519, pruned_loss=0.0312, over 18617.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2515, pruned_loss=0.03603, over 3578260.89 frames. 
], batch size: 52, lr: 5.15e-03, grad_scale: 8.0 2023-03-09 15:05:32,153 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77272.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:05:33,082 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77273.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:05:37,499 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77277.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:05:41,086 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.729e+02 2.727e+02 3.251e+02 3.771e+02 1.514e+03, threshold=6.501e+02, percent-clipped=4.0 2023-03-09 15:06:01,121 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-09 15:06:01,417 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-09 15:06:22,531 INFO [train.py:898] (3/4) Epoch 22, batch 1000, loss[loss=0.1338, simple_loss=0.2202, pruned_loss=0.02367, over 18260.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2518, pruned_loss=0.03593, over 3581793.79 frames. ], batch size: 45, lr: 5.15e-03, grad_scale: 8.0 2023-03-09 15:06:45,455 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77334.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:06:50,063 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77338.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:07:21,351 INFO [train.py:898] (3/4) Epoch 22, batch 1050, loss[loss=0.1686, simple_loss=0.2584, pruned_loss=0.03943, over 18383.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2514, pruned_loss=0.03568, over 3583412.60 frames. ], batch size: 55, lr: 5.15e-03, grad_scale: 8.0 2023-03-09 15:07:38,062 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.901e+02 2.608e+02 3.171e+02 4.041e+02 8.045e+02, threshold=6.343e+02, percent-clipped=4.0 2023-03-09 15:07:55,135 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-09 15:08:10,914 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5959, 5.0379, 5.0016, 5.0759, 4.5592, 4.9548, 4.4478, 4.9409], device='cuda:3'), covar=tensor([0.0271, 0.0270, 0.0220, 0.0443, 0.0434, 0.0228, 0.0942, 0.0329], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0262, 0.0256, 0.0333, 0.0272, 0.0269, 0.0311, 0.0263], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 15:08:19,750 INFO [train.py:898] (3/4) Epoch 22, batch 1100, loss[loss=0.2071, simple_loss=0.2853, pruned_loss=0.06442, over 12608.00 frames. ], tot_loss[loss=0.1624, simple_loss=0.2521, pruned_loss=0.03633, over 3579125.05 frames. 
], batch size: 130, lr: 5.15e-03, grad_scale: 8.0 2023-03-09 15:09:01,578 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9756, 4.1564, 2.5644, 4.1483, 5.2613, 2.5969, 3.7634, 4.0059], device='cuda:3'), covar=tensor([0.0192, 0.1268, 0.1743, 0.0696, 0.0083, 0.1426, 0.0762, 0.0757], device='cuda:3'), in_proj_covar=tensor([0.0168, 0.0274, 0.0204, 0.0196, 0.0128, 0.0184, 0.0215, 0.0225], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:09:02,514 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77451.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 15:09:09,814 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9058, 4.9693, 5.0456, 4.7074, 4.7479, 4.7492, 5.0541, 5.0924], device='cuda:3'), covar=tensor([0.0071, 0.0065, 0.0057, 0.0131, 0.0067, 0.0163, 0.0085, 0.0111], device='cuda:3'), in_proj_covar=tensor([0.0094, 0.0069, 0.0075, 0.0093, 0.0075, 0.0104, 0.0086, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 15:09:18,513 INFO [train.py:898] (3/4) Epoch 22, batch 1150, loss[loss=0.1736, simple_loss=0.2653, pruned_loss=0.04095, over 17970.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.2515, pruned_loss=0.0364, over 3583603.14 frames. ], batch size: 65, lr: 5.15e-03, grad_scale: 8.0 2023-03-09 15:09:35,428 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.520e+02 2.405e+02 2.730e+02 3.168e+02 5.091e+02, threshold=5.460e+02, percent-clipped=0.0 2023-03-09 15:09:52,930 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77495.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:10:16,607 INFO [train.py:898] (3/4) Epoch 22, batch 1200, loss[loss=0.1553, simple_loss=0.2324, pruned_loss=0.03912, over 17651.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2519, pruned_loss=0.03632, over 3592855.38 frames. ], batch size: 39, lr: 5.14e-03, grad_scale: 8.0 2023-03-09 15:10:48,607 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=77543.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:10:55,172 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-09 15:11:15,451 INFO [train.py:898] (3/4) Epoch 22, batch 1250, loss[loss=0.1475, simple_loss=0.2273, pruned_loss=0.03382, over 18153.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2512, pruned_loss=0.03594, over 3597338.43 frames. ], batch size: 44, lr: 5.14e-03, grad_scale: 8.0 2023-03-09 15:11:18,453 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77567.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:11:25,899 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-09 15:11:32,787 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.900e+02 2.671e+02 3.249e+02 3.869e+02 7.848e+02, threshold=6.498e+02, percent-clipped=7.0 2023-03-09 15:11:39,043 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.45 vs. limit=5.0 2023-03-09 15:12:13,400 INFO [train.py:898] (3/4) Epoch 22, batch 1300, loss[loss=0.1491, simple_loss=0.2311, pruned_loss=0.03357, over 17782.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.2519, pruned_loss=0.03622, over 3611109.88 frames. 
], batch size: 39, lr: 5.14e-03, grad_scale: 8.0 2023-03-09 15:12:14,878 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77616.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:12:30,253 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77629.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:12:34,968 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77633.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:13:12,278 INFO [train.py:898] (3/4) Epoch 22, batch 1350, loss[loss=0.142, simple_loss=0.233, pruned_loss=0.02555, over 18547.00 frames. ], tot_loss[loss=0.1619, simple_loss=0.2516, pruned_loss=0.03611, over 3595676.71 frames. ], batch size: 45, lr: 5.14e-03, grad_scale: 8.0 2023-03-09 15:13:26,524 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77677.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:13:26,547 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77677.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:13:29,478 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.757e+02 2.485e+02 2.994e+02 3.704e+02 6.281e+02, threshold=5.988e+02, percent-clipped=0.0 2023-03-09 15:13:49,201 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7074, 3.7073, 3.5502, 3.3048, 3.4660, 2.8985, 2.8906, 3.7099], device='cuda:3'), covar=tensor([0.0070, 0.0097, 0.0095, 0.0131, 0.0114, 0.0206, 0.0217, 0.0072], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0161, 0.0135, 0.0189, 0.0143, 0.0180, 0.0183, 0.0122], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 15:14:05,293 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9095, 3.5715, 5.2572, 3.2640, 4.6306, 2.5856, 3.1822, 1.9091], device='cuda:3'), covar=tensor([0.1185, 0.1039, 0.0130, 0.0753, 0.0468, 0.2727, 0.2463, 0.2085], device='cuda:3'), in_proj_covar=tensor([0.0220, 0.0246, 0.0195, 0.0199, 0.0258, 0.0271, 0.0323, 0.0234], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 15:14:10,458 INFO [train.py:898] (3/4) Epoch 22, batch 1400, loss[loss=0.1642, simple_loss=0.2565, pruned_loss=0.036, over 17829.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2526, pruned_loss=0.03673, over 3575304.07 frames. ], batch size: 70, lr: 5.14e-03, grad_scale: 4.0 2023-03-09 15:14:36,551 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77738.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:14:50,911 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77751.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 15:15:07,979 INFO [train.py:898] (3/4) Epoch 22, batch 1450, loss[loss=0.1621, simple_loss=0.2553, pruned_loss=0.03451, over 18386.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2525, pruned_loss=0.03691, over 3566399.07 frames. 
], batch size: 50, lr: 5.14e-03, grad_scale: 4.0 2023-03-09 15:15:26,458 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.943e+02 2.691e+02 3.335e+02 4.479e+02 1.440e+03, threshold=6.670e+02, percent-clipped=5.0 2023-03-09 15:15:46,879 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=77799.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:16:06,818 INFO [train.py:898] (3/4) Epoch 22, batch 1500, loss[loss=0.1609, simple_loss=0.2353, pruned_loss=0.04323, over 18161.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.252, pruned_loss=0.0365, over 3581013.73 frames. ], batch size: 44, lr: 5.13e-03, grad_scale: 4.0 2023-03-09 15:17:04,738 INFO [train.py:898] (3/4) Epoch 22, batch 1550, loss[loss=0.142, simple_loss=0.2247, pruned_loss=0.02962, over 17694.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2522, pruned_loss=0.03654, over 3581068.32 frames. ], batch size: 39, lr: 5.13e-03, grad_scale: 4.0 2023-03-09 15:17:07,465 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77867.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:17:23,130 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.906e+02 2.725e+02 3.263e+02 3.842e+02 6.752e+02, threshold=6.526e+02, percent-clipped=2.0 2023-03-09 15:18:03,269 INFO [train.py:898] (3/4) Epoch 22, batch 1600, loss[loss=0.1768, simple_loss=0.2756, pruned_loss=0.03903, over 18619.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2525, pruned_loss=0.03651, over 3574313.58 frames. ], batch size: 52, lr: 5.13e-03, grad_scale: 8.0 2023-03-09 15:18:03,443 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=77915.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:18:19,239 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77929.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:18:24,236 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77933.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:18:33,164 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0438, 4.2318, 2.9326, 4.1405, 5.3935, 2.7878, 4.0763, 4.2131], device='cuda:3'), covar=tensor([0.0196, 0.1175, 0.1361, 0.0659, 0.0086, 0.1174, 0.0595, 0.0670], device='cuda:3'), in_proj_covar=tensor([0.0171, 0.0275, 0.0207, 0.0197, 0.0130, 0.0186, 0.0216, 0.0228], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:18:52,551 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0 2023-03-09 15:19:00,971 INFO [train.py:898] (3/4) Epoch 22, batch 1650, loss[loss=0.1501, simple_loss=0.2242, pruned_loss=0.03797, over 17684.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2528, pruned_loss=0.0366, over 3579464.10 frames. 
], batch size: 39, lr: 5.13e-03, grad_scale: 8.0 2023-03-09 15:19:09,471 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77972.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:19:15,225 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=77977.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:19:19,841 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=77981.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:19:20,797 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.050e+02 2.525e+02 3.024e+02 3.859e+02 1.001e+03, threshold=6.048e+02, percent-clipped=3.0 2023-03-09 15:19:26,272 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5710, 2.9287, 4.3765, 3.7094, 2.8945, 4.6031, 3.8734, 2.7886], device='cuda:3'), covar=tensor([0.0540, 0.1412, 0.0281, 0.0422, 0.1464, 0.0198, 0.0554, 0.1026], device='cuda:3'), in_proj_covar=tensor([0.0212, 0.0238, 0.0211, 0.0163, 0.0222, 0.0211, 0.0247, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 15:20:03,120 INFO [train.py:898] (3/4) Epoch 22, batch 1700, loss[loss=0.1407, simple_loss=0.2208, pruned_loss=0.03035, over 18421.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2525, pruned_loss=0.03646, over 3588071.17 frames. ], batch size: 43, lr: 5.13e-03, grad_scale: 4.0 2023-03-09 15:20:25,579 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78033.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:21:01,844 INFO [train.py:898] (3/4) Epoch 22, batch 1750, loss[loss=0.1828, simple_loss=0.2724, pruned_loss=0.04657, over 18278.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2525, pruned_loss=0.03652, over 3589366.87 frames. ], batch size: 57, lr: 5.13e-03, grad_scale: 4.0 2023-03-09 15:21:22,782 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.851e+02 2.720e+02 3.295e+02 3.984e+02 2.464e+03, threshold=6.591e+02, percent-clipped=4.0 2023-03-09 15:22:00,368 INFO [train.py:898] (3/4) Epoch 22, batch 1800, loss[loss=0.1651, simple_loss=0.2655, pruned_loss=0.03235, over 18334.00 frames. ], tot_loss[loss=0.1629, simple_loss=0.253, pruned_loss=0.03642, over 3587400.73 frames. ], batch size: 55, lr: 5.12e-03, grad_scale: 4.0 2023-03-09 15:22:58,103 INFO [train.py:898] (3/4) Epoch 22, batch 1850, loss[loss=0.1616, simple_loss=0.2507, pruned_loss=0.03625, over 18360.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2524, pruned_loss=0.03644, over 3587883.25 frames. ], batch size: 50, lr: 5.12e-03, grad_scale: 4.0 2023-03-09 15:23:10,773 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-09 15:23:19,117 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.849e+02 2.526e+02 2.955e+02 3.559e+02 8.124e+02, threshold=5.910e+02, percent-clipped=2.0 2023-03-09 15:23:57,003 INFO [train.py:898] (3/4) Epoch 22, batch 1900, loss[loss=0.1453, simple_loss=0.2314, pruned_loss=0.02957, over 18389.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2524, pruned_loss=0.0366, over 3593749.85 frames. 
], batch size: 48, lr: 5.12e-03, grad_scale: 4.0 2023-03-09 15:24:09,990 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8454, 4.5161, 4.5485, 3.3850, 3.7018, 3.4794, 2.6371, 2.5223], device='cuda:3'), covar=tensor([0.0218, 0.0180, 0.0075, 0.0314, 0.0336, 0.0242, 0.0720, 0.0805], device='cuda:3'), in_proj_covar=tensor([0.0071, 0.0061, 0.0063, 0.0069, 0.0089, 0.0067, 0.0077, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 15:24:18,246 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78232.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:24:21,537 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9124, 4.6300, 4.6522, 3.4819, 3.8007, 3.5782, 2.6683, 2.4609], device='cuda:3'), covar=tensor([0.0204, 0.0144, 0.0060, 0.0297, 0.0292, 0.0213, 0.0718, 0.0840], device='cuda:3'), in_proj_covar=tensor([0.0071, 0.0060, 0.0062, 0.0069, 0.0089, 0.0067, 0.0077, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 15:24:43,815 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8345, 3.2450, 3.9422, 2.9078, 3.6836, 2.7104, 2.7864, 2.3738], device='cuda:3'), covar=tensor([0.1009, 0.0891, 0.0296, 0.0743, 0.0594, 0.2096, 0.2336, 0.1634], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0247, 0.0197, 0.0201, 0.0260, 0.0276, 0.0329, 0.0237], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 15:24:48,070 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5488, 3.4409, 2.2328, 4.4174, 3.0189, 4.1257, 2.5154, 3.8359], device='cuda:3'), covar=tensor([0.0637, 0.0856, 0.1506, 0.0476, 0.0884, 0.0304, 0.1236, 0.0455], device='cuda:3'), in_proj_covar=tensor([0.0215, 0.0227, 0.0192, 0.0286, 0.0193, 0.0266, 0.0204, 0.0202], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:24:55,259 INFO [train.py:898] (3/4) Epoch 22, batch 1950, loss[loss=0.1561, simple_loss=0.2391, pruned_loss=0.03658, over 18364.00 frames. ], tot_loss[loss=0.1621, simple_loss=0.2515, pruned_loss=0.03634, over 3601444.11 frames. 
], batch size: 46, lr: 5.12e-03, grad_scale: 4.0 2023-03-09 15:25:03,415 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=78272.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:25:09,340 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5784, 2.6106, 2.6049, 2.5185, 2.5773, 2.2383, 2.2671, 2.6473], device='cuda:3'), covar=tensor([0.0079, 0.0109, 0.0077, 0.0111, 0.0099, 0.0152, 0.0190, 0.0085], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0162, 0.0135, 0.0189, 0.0144, 0.0180, 0.0182, 0.0122], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 15:25:14,978 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.944e+02 2.648e+02 3.049e+02 3.777e+02 6.458e+02, threshold=6.098e+02, percent-clipped=2.0 2023-03-09 15:25:28,646 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=78293.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 15:25:35,417 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3803, 2.6600, 2.3987, 2.7964, 3.4793, 3.3277, 2.9253, 2.8187], device='cuda:3'), covar=tensor([0.0186, 0.0309, 0.0553, 0.0363, 0.0196, 0.0179, 0.0383, 0.0401], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0137, 0.0163, 0.0159, 0.0132, 0.0120, 0.0155, 0.0160], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:25:53,380 INFO [train.py:898] (3/4) Epoch 22, batch 2000, loss[loss=0.1757, simple_loss=0.2727, pruned_loss=0.03932, over 18026.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2511, pruned_loss=0.03597, over 3602382.37 frames. ], batch size: 65, lr: 5.12e-03, grad_scale: 8.0 2023-03-09 15:25:59,294 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=78320.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:26:14,754 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=78333.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:26:47,366 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8046, 3.3182, 4.4308, 2.8230, 3.9854, 2.6275, 2.7808, 2.0260], device='cuda:3'), covar=tensor([0.1155, 0.0967, 0.0233, 0.0929, 0.0568, 0.2490, 0.2636, 0.1983], device='cuda:3'), in_proj_covar=tensor([0.0221, 0.0244, 0.0196, 0.0199, 0.0257, 0.0273, 0.0325, 0.0234], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 15:26:52,359 INFO [train.py:898] (3/4) Epoch 22, batch 2050, loss[loss=0.1663, simple_loss=0.2527, pruned_loss=0.0399, over 18398.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2512, pruned_loss=0.03605, over 3602659.82 frames. ], batch size: 48, lr: 5.12e-03, grad_scale: 8.0 2023-03-09 15:27:10,381 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=78381.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:27:11,260 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.641e+02 2.890e+02 3.341e+02 3.968e+02 7.791e+02, threshold=6.681e+02, percent-clipped=3.0 2023-03-09 15:27:50,918 INFO [train.py:898] (3/4) Epoch 22, batch 2100, loss[loss=0.1551, simple_loss=0.2464, pruned_loss=0.03187, over 18495.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2508, pruned_loss=0.03602, over 3596841.10 frames. 
], batch size: 47, lr: 5.11e-03, grad_scale: 8.0 2023-03-09 15:27:51,628 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.90 vs. limit=5.0 2023-03-09 15:28:02,377 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1531, 3.0989, 1.9853, 3.8284, 2.6824, 3.5336, 2.2593, 3.2602], device='cuda:3'), covar=tensor([0.0700, 0.0912, 0.1511, 0.0566, 0.0876, 0.0410, 0.1249, 0.0487], device='cuda:3'), in_proj_covar=tensor([0.0216, 0.0227, 0.0193, 0.0286, 0.0193, 0.0265, 0.0204, 0.0202], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:28:43,398 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78460.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:28:49,243 INFO [train.py:898] (3/4) Epoch 22, batch 2150, loss[loss=0.1323, simple_loss=0.2136, pruned_loss=0.0255, over 18499.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.252, pruned_loss=0.03631, over 3597825.59 frames. ], batch size: 44, lr: 5.11e-03, grad_scale: 8.0 2023-03-09 15:29:08,264 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.757e+02 2.625e+02 3.036e+02 3.633e+02 1.479e+03, threshold=6.073e+02, percent-clipped=4.0 2023-03-09 15:29:25,841 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8839, 4.5464, 4.5859, 3.4585, 3.7131, 3.6276, 2.7267, 2.4925], device='cuda:3'), covar=tensor([0.0225, 0.0171, 0.0068, 0.0303, 0.0312, 0.0208, 0.0751, 0.0860], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0059, 0.0062, 0.0069, 0.0088, 0.0066, 0.0076, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 15:29:47,487 INFO [train.py:898] (3/4) Epoch 22, batch 2200, loss[loss=0.1724, simple_loss=0.2626, pruned_loss=0.04112, over 18205.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2516, pruned_loss=0.03619, over 3593413.74 frames. ], batch size: 60, lr: 5.11e-03, grad_scale: 8.0 2023-03-09 15:29:55,165 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=78521.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:30:35,219 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78556.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:30:44,767 INFO [train.py:898] (3/4) Epoch 22, batch 2250, loss[loss=0.1776, simple_loss=0.2682, pruned_loss=0.04345, over 18216.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2519, pruned_loss=0.03631, over 3603364.81 frames. ], batch size: 60, lr: 5.11e-03, grad_scale: 8.0 2023-03-09 15:31:04,900 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.048e+02 2.745e+02 3.101e+02 3.633e+02 7.718e+02, threshold=6.202e+02, percent-clipped=2.0 2023-03-09 15:31:11,573 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78588.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 15:31:43,707 INFO [train.py:898] (3/4) Epoch 22, batch 2300, loss[loss=0.1534, simple_loss=0.2451, pruned_loss=0.03085, over 18272.00 frames. ], tot_loss[loss=0.1629, simple_loss=0.2526, pruned_loss=0.03661, over 3590218.80 frames. 
], batch size: 54, lr: 5.11e-03, grad_scale: 8.0 2023-03-09 15:31:46,344 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=78617.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:32:42,002 INFO [train.py:898] (3/4) Epoch 22, batch 2350, loss[loss=0.1605, simple_loss=0.2518, pruned_loss=0.03466, over 18504.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2527, pruned_loss=0.03637, over 3597424.53 frames. ], batch size: 53, lr: 5.11e-03, grad_scale: 8.0 2023-03-09 15:33:01,735 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.907e+02 2.584e+02 2.934e+02 3.285e+02 5.590e+02, threshold=5.868e+02, percent-clipped=0.0 2023-03-09 15:33:21,212 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6493, 4.6700, 4.6650, 4.4288, 4.4839, 4.5200, 4.7419, 4.7885], device='cuda:3'), covar=tensor([0.0076, 0.0069, 0.0067, 0.0126, 0.0072, 0.0161, 0.0090, 0.0098], device='cuda:3'), in_proj_covar=tensor([0.0095, 0.0070, 0.0076, 0.0094, 0.0076, 0.0105, 0.0088, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 15:33:40,404 INFO [train.py:898] (3/4) Epoch 22, batch 2400, loss[loss=0.179, simple_loss=0.264, pruned_loss=0.04697, over 18256.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2534, pruned_loss=0.03671, over 3593325.34 frames. ], batch size: 60, lr: 5.10e-03, grad_scale: 8.0 2023-03-09 15:33:44,471 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-09 15:34:39,541 INFO [train.py:898] (3/4) Epoch 22, batch 2450, loss[loss=0.1434, simple_loss=0.2251, pruned_loss=0.03082, over 18502.00 frames. ], tot_loss[loss=0.1637, simple_loss=0.2534, pruned_loss=0.03698, over 3576440.64 frames. ], batch size: 44, lr: 5.10e-03, grad_scale: 4.0 2023-03-09 15:35:00,356 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.881e+02 2.648e+02 3.233e+02 3.743e+02 6.587e+02, threshold=6.467e+02, percent-clipped=2.0 2023-03-09 15:35:38,713 INFO [train.py:898] (3/4) Epoch 22, batch 2500, loss[loss=0.1538, simple_loss=0.2521, pruned_loss=0.02776, over 18475.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2527, pruned_loss=0.03666, over 3585136.30 frames. ], batch size: 53, lr: 5.10e-03, grad_scale: 4.0 2023-03-09 15:35:39,958 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78816.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:36:37,305 INFO [train.py:898] (3/4) Epoch 22, batch 2550, loss[loss=0.1843, simple_loss=0.2733, pruned_loss=0.04769, over 16841.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2523, pruned_loss=0.03667, over 3588000.58 frames. ], batch size: 78, lr: 5.10e-03, grad_scale: 4.0 2023-03-09 15:36:57,001 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.832e+02 2.718e+02 3.165e+02 3.980e+02 2.438e+03, threshold=6.330e+02, percent-clipped=4.0 2023-03-09 15:37:03,693 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=78888.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:37:30,384 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78912.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:37:34,108 INFO [train.py:898] (3/4) Epoch 22, batch 2600, loss[loss=0.2015, simple_loss=0.2848, pruned_loss=0.05911, over 12432.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2527, pruned_loss=0.03664, over 3585320.68 frames. 
], batch size: 129, lr: 5.10e-03, grad_scale: 4.0 2023-03-09 15:37:58,522 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=78936.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:38:31,475 INFO [train.py:898] (3/4) Epoch 22, batch 2650, loss[loss=0.176, simple_loss=0.2659, pruned_loss=0.04309, over 18629.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2527, pruned_loss=0.03661, over 3586655.57 frames. ], batch size: 52, lr: 5.10e-03, grad_scale: 4.0 2023-03-09 15:38:31,873 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78965.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:38:52,453 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.743e+02 2.628e+02 3.171e+02 3.718e+02 9.310e+02, threshold=6.343e+02, percent-clipped=1.0 2023-03-09 15:39:16,141 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5804, 3.3794, 2.1501, 4.3754, 3.1546, 4.1525, 2.4088, 3.8739], device='cuda:3'), covar=tensor([0.0643, 0.0821, 0.1505, 0.0474, 0.0787, 0.0254, 0.1208, 0.0403], device='cuda:3'), in_proj_covar=tensor([0.0214, 0.0225, 0.0191, 0.0285, 0.0192, 0.0261, 0.0203, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:39:28,933 INFO [train.py:898] (3/4) Epoch 22, batch 2700, loss[loss=0.1452, simple_loss=0.2425, pruned_loss=0.02395, over 18570.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2529, pruned_loss=0.03674, over 3583976.99 frames. ], batch size: 54, lr: 5.09e-03, grad_scale: 4.0 2023-03-09 15:39:42,949 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79026.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:39:55,728 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-09 15:39:56,899 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-09 15:40:18,242 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. limit=5.0 2023-03-09 15:40:27,082 INFO [train.py:898] (3/4) Epoch 22, batch 2750, loss[loss=0.185, simple_loss=0.2763, pruned_loss=0.04685, over 18483.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2521, pruned_loss=0.03653, over 3592696.36 frames. ], batch size: 59, lr: 5.09e-03, grad_scale: 4.0 2023-03-09 15:40:48,101 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.755e+02 2.673e+02 3.245e+02 3.869e+02 1.413e+03, threshold=6.491e+02, percent-clipped=1.0 2023-03-09 15:41:15,292 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5733, 3.4044, 2.2439, 4.4434, 3.1629, 4.2524, 2.4972, 4.0437], device='cuda:3'), covar=tensor([0.0685, 0.0828, 0.1540, 0.0467, 0.0817, 0.0307, 0.1253, 0.0391], device='cuda:3'), in_proj_covar=tensor([0.0217, 0.0228, 0.0192, 0.0288, 0.0194, 0.0265, 0.0206, 0.0203], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:41:25,131 INFO [train.py:898] (3/4) Epoch 22, batch 2800, loss[loss=0.1791, simple_loss=0.2722, pruned_loss=0.04297, over 18065.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2522, pruned_loss=0.03666, over 3578605.79 frames. 
], batch size: 65, lr: 5.09e-03, grad_scale: 8.0 2023-03-09 15:41:26,493 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79116.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:42:22,283 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=79164.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:42:23,246 INFO [train.py:898] (3/4) Epoch 22, batch 2850, loss[loss=0.1524, simple_loss=0.2455, pruned_loss=0.02965, over 18488.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2525, pruned_loss=0.03676, over 3566964.40 frames. ], batch size: 51, lr: 5.09e-03, grad_scale: 8.0 2023-03-09 15:42:45,535 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.599e+02 2.640e+02 3.106e+02 3.691e+02 7.833e+02, threshold=6.212e+02, percent-clipped=1.0 2023-03-09 15:43:19,591 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79212.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:43:22,672 INFO [train.py:898] (3/4) Epoch 22, batch 2900, loss[loss=0.1827, simple_loss=0.2649, pruned_loss=0.05031, over 18491.00 frames. ], tot_loss[loss=0.1629, simple_loss=0.2528, pruned_loss=0.03647, over 3588133.77 frames. ], batch size: 59, lr: 5.09e-03, grad_scale: 4.0 2023-03-09 15:44:15,606 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=79260.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:44:21,242 INFO [train.py:898] (3/4) Epoch 22, batch 2950, loss[loss=0.1696, simple_loss=0.2621, pruned_loss=0.03855, over 18298.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2522, pruned_loss=0.03658, over 3588556.63 frames. ], batch size: 57, lr: 5.09e-03, grad_scale: 4.0 2023-03-09 15:44:34,928 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1033, 3.8542, 5.2096, 2.8364, 4.5860, 2.6093, 3.0882, 1.8458], device='cuda:3'), covar=tensor([0.1027, 0.0824, 0.0124, 0.0971, 0.0483, 0.2674, 0.2723, 0.2195], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0246, 0.0198, 0.0200, 0.0259, 0.0273, 0.0323, 0.0236], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 15:44:42,794 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.956e+02 2.705e+02 3.045e+02 3.689e+02 6.352e+02, threshold=6.090e+02, percent-clipped=1.0 2023-03-09 15:44:44,792 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79285.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 15:45:07,883 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9754, 4.0788, 2.5938, 4.1726, 5.3477, 2.6878, 3.9273, 4.0435], device='cuda:3'), covar=tensor([0.0192, 0.1248, 0.1501, 0.0561, 0.0072, 0.1105, 0.0599, 0.0693], device='cuda:3'), in_proj_covar=tensor([0.0169, 0.0270, 0.0202, 0.0195, 0.0128, 0.0182, 0.0214, 0.0223], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:45:19,367 INFO [train.py:898] (3/4) Epoch 22, batch 3000, loss[loss=0.1613, simple_loss=0.2555, pruned_loss=0.03357, over 18402.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2521, pruned_loss=0.0365, over 3583720.22 frames. ], batch size: 52, lr: 5.09e-03, grad_scale: 4.0 2023-03-09 15:45:19,367 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 15:45:31,240 INFO [train.py:932] (3/4) Epoch 22, validation: loss=0.1498, simple_loss=0.249, pruned_loss=0.02526, over 944034.00 frames. 
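A note on the recurring figures in these entries: each batch line reports the current batch's loss alongside tot_loss "over N frames", where N hovers around 3.2M-3.6M rather than growing without bound; with batches of roughly 18k frames, that is consistent with an exponentially decayed, frame-weighted accumulator with a horizon of about 200 batches. Likewise, the optim.py lines print five gradient-norm statistics (apparently min, 25%, median, 75%, max over recent batches), and in every entry above the printed threshold equals 2.0 times the printed median (e.g. 2 x 2.928e+02 = 5.857e+02), matching Clipping_scale=2.0; percent-clipped evidently tracks how often clipping fired between reports. The sketch below illustrates both mechanisms under those assumptions; the names are illustrative, and this is not the actual train.py/optim.py code.

import numpy as np

class RunningLoss:
    # Frame-weighted loss with exponential decay: the accumulated frame
    # count settles near (mean batch frames) * horizon, which matches the
    # ~3.5M frames printed in the tot_loss entries above (18k * 200).
    def __init__(self, horizon: int = 200):
        self.keep = 1.0 - 1.0 / horizon
        self.loss_frames = 0.0  # decayed sum of loss * frames
        self.frames = 0.0       # decayed sum of frames

    def update(self, loss: float, frames: float) -> None:
        self.loss_frames = self.keep * self.loss_frames + loss * frames
        self.frames = self.keep * self.frames + frames

    def value(self) -> float:
        return self.loss_frames / max(self.frames, 1.0)

def grad_norm_stats(recent_norms, clipping_scale: float = 2.0):
    # Quartiles of recently observed gradient norms; the clipping
    # threshold is taken as clipping_scale * median, reproducing the
    # logged relationship (a window with median 292.8 yields the
    # threshold 585.7 seen in the 14:55:57 entry above).
    q = np.quantile(np.asarray(recent_norms), [0.0, 0.25, 0.5, 0.75, 1.0])
    threshold = clipping_scale * q[2]
    return q, threshold

# Illustrative usage: update the running loss once per batch and print
# quartiles/threshold every reporting interval, as the log does.
tracker = RunningLoss(horizon=200)
tracker.update(loss=0.1626, frames=18277.0)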
2023-03-09 15:45:31,241 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 15:45:39,497 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79321.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:45:39,853 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-09 15:45:53,390 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79333.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:46:08,282 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79346.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 15:46:21,471 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-09 15:46:28,610 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79364.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:46:29,399 INFO [train.py:898] (3/4) Epoch 22, batch 3050, loss[loss=0.1697, simple_loss=0.26, pruned_loss=0.03971, over 18382.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2524, pruned_loss=0.03641, over 3588809.31 frames. ], batch size: 56, lr: 5.08e-03, grad_scale: 4.0 2023-03-09 15:46:42,470 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79375.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 15:46:52,687 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.974e+02 2.697e+02 3.145e+02 3.968e+02 7.194e+02, threshold=6.290e+02, percent-clipped=6.0 2023-03-09 15:47:04,557 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79394.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:47:28,489 INFO [train.py:898] (3/4) Epoch 22, batch 3100, loss[loss=0.1395, simple_loss=0.2297, pruned_loss=0.02461, over 18405.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2514, pruned_loss=0.03611, over 3593124.40 frames. ], batch size: 48, lr: 5.08e-03, grad_scale: 4.0 2023-03-09 15:47:40,609 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79425.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:47:54,518 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79436.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 15:48:18,258 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-09 15:48:27,415 INFO [train.py:898] (3/4) Epoch 22, batch 3150, loss[loss=0.183, simple_loss=0.267, pruned_loss=0.04956, over 12714.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2509, pruned_loss=0.03583, over 3591887.11 frames. ], batch size: 129, lr: 5.08e-03, grad_scale: 4.0 2023-03-09 15:48:50,535 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.854e+02 2.702e+02 3.215e+02 3.838e+02 6.492e+02, threshold=6.430e+02, percent-clipped=3.0 2023-03-09 15:49:25,663 INFO [train.py:898] (3/4) Epoch 22, batch 3200, loss[loss=0.1434, simple_loss=0.2309, pruned_loss=0.02794, over 18365.00 frames. ], tot_loss[loss=0.1608, simple_loss=0.2503, pruned_loss=0.03562, over 3587833.10 frames. ], batch size: 46, lr: 5.08e-03, grad_scale: 8.0 2023-03-09 15:49:37,572 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.60 vs. 
limit=5.0 2023-03-09 15:50:08,311 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0962, 5.2235, 5.1828, 4.8960, 4.9925, 4.9903, 5.2635, 5.3084], device='cuda:3'), covar=tensor([0.0066, 0.0057, 0.0059, 0.0110, 0.0049, 0.0161, 0.0068, 0.0085], device='cuda:3'), in_proj_covar=tensor([0.0096, 0.0071, 0.0076, 0.0095, 0.0076, 0.0106, 0.0088, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 15:50:24,061 INFO [train.py:898] (3/4) Epoch 22, batch 3250, loss[loss=0.1662, simple_loss=0.2702, pruned_loss=0.03109, over 18569.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2504, pruned_loss=0.03552, over 3597179.20 frames. ], batch size: 54, lr: 5.08e-03, grad_scale: 8.0 2023-03-09 15:50:38,955 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9929, 5.1597, 5.3553, 5.3607, 4.9576, 5.8604, 5.4528, 5.1785], device='cuda:3'), covar=tensor([0.1428, 0.0687, 0.0746, 0.0832, 0.1554, 0.0749, 0.0714, 0.1660], device='cuda:3'), in_proj_covar=tensor([0.0363, 0.0286, 0.0317, 0.0315, 0.0331, 0.0430, 0.0286, 0.0423], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 15:50:46,085 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.828e+02 2.603e+02 3.051e+02 3.520e+02 6.535e+02, threshold=6.103e+02, percent-clipped=1.0 2023-03-09 15:50:47,589 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79585.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:51:21,968 INFO [train.py:898] (3/4) Epoch 22, batch 3300, loss[loss=0.1783, simple_loss=0.2725, pruned_loss=0.0421, over 18565.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2512, pruned_loss=0.03574, over 3596877.05 frames. ], batch size: 54, lr: 5.08e-03, grad_scale: 8.0 2023-03-09 15:51:28,948 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79621.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:51:36,937 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9416, 4.1276, 2.3033, 4.0009, 5.2462, 2.5267, 3.8705, 4.0054], device='cuda:3'), covar=tensor([0.0202, 0.1351, 0.1902, 0.0753, 0.0093, 0.1438, 0.0693, 0.0765], device='cuda:3'), in_proj_covar=tensor([0.0171, 0.0273, 0.0205, 0.0198, 0.0130, 0.0184, 0.0217, 0.0226], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:51:52,904 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79641.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 15:51:58,662 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79646.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:52:09,992 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.13 vs. 
limit=5.0 2023-03-09 15:52:10,881 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79657.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:52:17,073 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6149, 3.5742, 3.5722, 3.1937, 3.4538, 2.8976, 2.7438, 3.6839], device='cuda:3'), covar=tensor([0.0070, 0.0096, 0.0079, 0.0138, 0.0086, 0.0167, 0.0202, 0.0058], device='cuda:3'), in_proj_covar=tensor([0.0142, 0.0161, 0.0135, 0.0188, 0.0144, 0.0179, 0.0181, 0.0122], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 15:52:19,977 INFO [train.py:898] (3/4) Epoch 22, batch 3350, loss[loss=0.1819, simple_loss=0.2795, pruned_loss=0.0422, over 17181.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.252, pruned_loss=0.03617, over 3581292.42 frames. ], batch size: 78, lr: 5.07e-03, grad_scale: 4.0 2023-03-09 15:52:24,604 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=79669.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:52:42,674 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.916e+02 2.736e+02 3.253e+02 3.881e+02 1.172e+03, threshold=6.507e+02, percent-clipped=7.0 2023-03-09 15:52:47,487 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79689.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:53:08,018 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.33 vs. limit=5.0 2023-03-09 15:53:18,222 INFO [train.py:898] (3/4) Epoch 22, batch 3400, loss[loss=0.1616, simple_loss=0.2523, pruned_loss=0.03541, over 18394.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2519, pruned_loss=0.03603, over 3579680.65 frames. ], batch size: 50, lr: 5.07e-03, grad_scale: 4.0 2023-03-09 15:53:22,079 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79718.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:53:24,117 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79720.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:53:36,307 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79731.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 15:54:12,547 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79762.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:54:15,571 INFO [train.py:898] (3/4) Epoch 22, batch 3450, loss[loss=0.1585, simple_loss=0.2527, pruned_loss=0.03219, over 18502.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.252, pruned_loss=0.03615, over 3582855.22 frames. 
], batch size: 51, lr: 5.07e-03, grad_scale: 4.0 2023-03-09 15:54:38,366 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.822e+02 2.532e+02 2.897e+02 3.710e+02 6.365e+02, threshold=5.795e+02, percent-clipped=0.0 2023-03-09 15:54:44,623 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7046, 2.6224, 4.3654, 3.8156, 2.2653, 4.6386, 3.9362, 2.6986], device='cuda:3'), covar=tensor([0.0479, 0.1935, 0.0365, 0.0408, 0.2201, 0.0243, 0.0602, 0.1381], device='cuda:3'), in_proj_covar=tensor([0.0213, 0.0236, 0.0211, 0.0164, 0.0223, 0.0209, 0.0247, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 15:54:48,496 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7736, 4.4293, 4.4448, 3.3649, 3.6310, 3.2964, 2.5047, 2.4803], device='cuda:3'), covar=tensor([0.0210, 0.0136, 0.0080, 0.0330, 0.0390, 0.0280, 0.0801, 0.0890], device='cuda:3'), in_proj_covar=tensor([0.0070, 0.0060, 0.0063, 0.0069, 0.0089, 0.0067, 0.0077, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 15:55:10,083 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8070, 2.9416, 2.6968, 3.0346, 3.8924, 3.7657, 3.2878, 3.0032], device='cuda:3'), covar=tensor([0.0149, 0.0350, 0.0606, 0.0394, 0.0197, 0.0145, 0.0351, 0.0405], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0143, 0.0169, 0.0165, 0.0137, 0.0123, 0.0159, 0.0164], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:55:14,088 INFO [train.py:898] (3/4) Epoch 22, batch 3500, loss[loss=0.1816, simple_loss=0.2735, pruned_loss=0.04482, over 18352.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.2522, pruned_loss=0.0361, over 3588786.36 frames. ], batch size: 56, lr: 5.07e-03, grad_scale: 4.0 2023-03-09 15:55:23,997 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79823.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:56:09,526 INFO [train.py:898] (3/4) Epoch 22, batch 3550, loss[loss=0.1666, simple_loss=0.261, pruned_loss=0.03611, over 18380.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2519, pruned_loss=0.03602, over 3592692.26 frames. ], batch size: 50, lr: 5.07e-03, grad_scale: 4.0 2023-03-09 15:56:32,290 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.670e+02 2.485e+02 2.929e+02 3.619e+02 1.143e+03, threshold=5.859e+02, percent-clipped=2.0 2023-03-09 15:57:05,048 INFO [train.py:898] (3/4) Epoch 22, batch 3600, loss[loss=0.1479, simple_loss=0.2305, pruned_loss=0.03272, over 18272.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2523, pruned_loss=0.03616, over 3582123.48 frames. 
], batch size: 47, lr: 5.07e-03, grad_scale: 8.0 2023-03-09 15:57:20,661 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7167, 3.4636, 2.6684, 4.5160, 3.2900, 4.3072, 2.5533, 3.9919], device='cuda:3'), covar=tensor([0.0620, 0.0856, 0.1258, 0.0462, 0.0779, 0.0309, 0.1213, 0.0396], device='cuda:3'), in_proj_covar=tensor([0.0216, 0.0226, 0.0190, 0.0286, 0.0193, 0.0264, 0.0204, 0.0202], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 15:57:32,504 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79941.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:57:32,556 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79941.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 15:58:07,277 INFO [train.py:898] (3/4) Epoch 23, batch 0, loss[loss=0.1844, simple_loss=0.2723, pruned_loss=0.04828, over 12732.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2723, pruned_loss=0.04828, over 12732.00 frames. ], batch size: 130, lr: 4.95e-03, grad_scale: 8.0 2023-03-09 15:58:07,278 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 15:58:18,935 INFO [train.py:932] (3/4) Epoch 23, validation: loss=0.1494, simple_loss=0.2493, pruned_loss=0.02473, over 944034.00 frames. 2023-03-09 15:58:18,936 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 15:59:01,337 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.492e+02 2.757e+02 3.307e+02 4.142e+02 8.059e+02, threshold=6.615e+02, percent-clipped=1.0 2023-03-09 15:59:06,150 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=79989.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 15:59:06,187 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79989.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:59:14,612 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-09 15:59:17,066 INFO [train.py:898] (3/4) Epoch 23, batch 50, loss[loss=0.1634, simple_loss=0.2588, pruned_loss=0.034, over 18241.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2566, pruned_loss=0.03803, over 807764.37 frames. 
], batch size: 60, lr: 4.95e-03, grad_scale: 8.0 2023-03-09 15:59:32,136 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5213, 5.9798, 5.4874, 5.7925, 5.6200, 5.4667, 6.0643, 5.9935], device='cuda:3'), covar=tensor([0.1237, 0.0818, 0.0503, 0.0726, 0.1417, 0.0709, 0.0628, 0.0759], device='cuda:3'), in_proj_covar=tensor([0.0624, 0.0541, 0.0388, 0.0564, 0.0761, 0.0560, 0.0771, 0.0585], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 15:59:38,350 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80013.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 15:59:46,557 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80020.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 16:00:00,053 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80031.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 16:00:03,329 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6536, 5.1142, 5.0980, 5.1530, 4.6260, 5.0206, 4.4721, 5.0135], device='cuda:3'), covar=tensor([0.0248, 0.0286, 0.0197, 0.0451, 0.0399, 0.0230, 0.1056, 0.0297], device='cuda:3'), in_proj_covar=tensor([0.0217, 0.0262, 0.0255, 0.0330, 0.0272, 0.0268, 0.0306, 0.0260], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 16:00:05,710 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3742, 5.3669, 5.0151, 5.2647, 5.2833, 4.7630, 5.2236, 4.9681], device='cuda:3'), covar=tensor([0.0474, 0.0438, 0.1258, 0.0823, 0.0627, 0.0391, 0.0436, 0.1123], device='cuda:3'), in_proj_covar=tensor([0.0500, 0.0564, 0.0703, 0.0443, 0.0457, 0.0512, 0.0548, 0.0683], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 16:00:06,743 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=80037.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 16:00:19,377 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9408, 3.5204, 5.0761, 2.8239, 4.4798, 2.6588, 3.0606, 1.8285], device='cuda:3'), covar=tensor([0.1186, 0.1107, 0.0144, 0.1019, 0.0487, 0.2591, 0.2786, 0.2314], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0246, 0.0200, 0.0200, 0.0258, 0.0271, 0.0322, 0.0237], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 16:00:20,051 INFO [train.py:898] (3/4) Epoch 23, batch 100, loss[loss=0.161, simple_loss=0.2594, pruned_loss=0.03128, over 18347.00 frames. ], tot_loss[loss=0.1642, simple_loss=0.2542, pruned_loss=0.03715, over 1430864.86 frames. 
], batch size: 55, lr: 4.95e-03, grad_scale: 8.0 2023-03-09 16:00:42,468 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=80068.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 16:00:56,572 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=80079.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 16:01:02,384 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1848, 3.6090, 5.1486, 2.9491, 4.4305, 2.7082, 3.3050, 1.9822], device='cuda:3'), covar=tensor([0.1025, 0.1048, 0.0165, 0.0982, 0.0632, 0.2559, 0.2572, 0.2158], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0247, 0.0201, 0.0201, 0.0260, 0.0273, 0.0324, 0.0238], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 16:01:02,970 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.638e+02 2.619e+02 2.981e+02 3.602e+02 9.245e+02, threshold=5.963e+02, percent-clipped=1.0 2023-03-09 16:01:04,914 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-09 16:01:15,061 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0 2023-03-09 16:01:18,865 INFO [train.py:898] (3/4) Epoch 23, batch 150, loss[loss=0.1576, simple_loss=0.238, pruned_loss=0.03864, over 18492.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2527, pruned_loss=0.03686, over 1909206.16 frames. ], batch size: 44, lr: 4.95e-03, grad_scale: 8.0 2023-03-09 16:01:40,170 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80118.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 16:02:14,164 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5584, 2.2696, 2.5337, 2.4770, 2.9834, 4.3539, 4.2586, 3.1821], device='cuda:3'), covar=tensor([0.1955, 0.2562, 0.2852, 0.2035, 0.2540, 0.0347, 0.0485, 0.1000], device='cuda:3'), in_proj_covar=tensor([0.0308, 0.0351, 0.0388, 0.0281, 0.0390, 0.0251, 0.0298, 0.0259], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 16:02:17,095 INFO [train.py:898] (3/4) Epoch 23, batch 200, loss[loss=0.162, simple_loss=0.2613, pruned_loss=0.03137, over 18322.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2529, pruned_loss=0.03673, over 2280435.13 frames. ], batch size: 54, lr: 4.95e-03, grad_scale: 8.0 2023-03-09 16:02:59,527 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.861e+02 2.844e+02 3.404e+02 4.018e+02 9.589e+02, threshold=6.808e+02, percent-clipped=5.0 2023-03-09 16:03:15,925 INFO [train.py:898] (3/4) Epoch 23, batch 250, loss[loss=0.1442, simple_loss=0.2261, pruned_loss=0.03112, over 18489.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2511, pruned_loss=0.03609, over 2570600.50 frames. ], batch size: 44, lr: 4.94e-03, grad_scale: 8.0 2023-03-09 16:03:35,071 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-09 16:03:46,230 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.93 vs. limit=5.0 2023-03-09 16:04:05,530 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80241.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 16:04:14,924 INFO [train.py:898] (3/4) Epoch 23, batch 300, loss[loss=0.1472, simple_loss=0.2319, pruned_loss=0.03127, over 18429.00 frames. 
2023-03-09 16:04:25,752 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.69 vs. limit=2.0
2023-03-09 16:04:53,027 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80283.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:04:54,787 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.957e+02 2.544e+02 3.151e+02 3.664e+02 8.600e+02, threshold=6.302e+02, percent-clipped=1.0
2023-03-09 16:05:00,697 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=80289.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:05:12,209 INFO [train.py:898] (3/4) Epoch 23, batch 350, loss[loss=0.1498, simple_loss=0.2482, pruned_loss=0.02569, over 18380.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2518, pruned_loss=0.03615, over 2973604.94 frames. ], batch size: 55, lr: 4.94e-03, grad_scale: 8.0
2023-03-09 16:05:27,909 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80313.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:06:04,366 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80344.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:06:09,758 INFO [train.py:898] (3/4) Epoch 23, batch 400, loss[loss=0.1441, simple_loss=0.2308, pruned_loss=0.02876, over 18419.00 frames. ], tot_loss[loss=0.1621, simple_loss=0.2519, pruned_loss=0.03612, over 3114071.16 frames. ], batch size: 48, lr: 4.94e-03, grad_scale: 8.0
2023-03-09 16:06:17,496 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-09 16:06:23,720 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=80361.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:06:32,042 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0
2023-03-09 16:06:50,974 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.659e+02 2.745e+02 3.129e+02 3.783e+02 6.813e+02, threshold=6.257e+02, percent-clipped=3.0
2023-03-09 16:07:02,323 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80393.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:07:08,711 INFO [train.py:898] (3/4) Epoch 23, batch 450, loss[loss=0.1594, simple_loss=0.2545, pruned_loss=0.03216, over 18578.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2518, pruned_loss=0.03587, over 3225177.15 frames. ], batch size: 54, lr: 4.94e-03, grad_scale: 8.0
2023-03-09 16:07:31,457 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80418.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:08:07,203 INFO [train.py:898] (3/4) Epoch 23, batch 500, loss[loss=0.1782, simple_loss=0.267, pruned_loss=0.04475, over 17060.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2516, pruned_loss=0.03595, over 3303999.24 frames. ], batch size: 78, lr: 4.94e-03, grad_scale: 8.0
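The zipformer.py:625 lines track stochastic layer skipping per encoder stack: each stack has its own warmup window (warmup_begin/warmup_end, in batches), and on some batches one or more layers are dropped (num_to_drop, layers_to_drop). The schedule below is an invented, illustrative one (drop probability decaying across the warmup window); only the logged field names are taken from the log:

    import random

    def choose_layers_to_drop(batch_count, warmup_begin, warmup_end,
                              num_layers, base_prob=0.5, final_prob=0.05):
        # Linearly decay the per-layer drop probability over the warmup window.
        if batch_count >= warmup_end:
            p = final_prob
        elif batch_count <= warmup_begin:
            p = base_prob
        else:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = base_prob + frac * (final_prob - base_prob)
        num_to_drop = sum(random.random() < p for _ in range(num_layers))
        layers_to_drop = set(random.sample(range(num_layers), num_to_drop))
        return num_to_drop, layers_to_drop

With batch_count around 80,000, every stack is far past its warmup_end of at most 4000.0, which fits the log: num_to_drop is usually 0 and only occasionally 1.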
2023-03-09 16:08:13,207 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80454.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:08:23,796 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80463.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:08:26,786 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=80466.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:08:47,903 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.857e+02 2.536e+02 3.027e+02 3.548e+02 8.560e+02, threshold=6.054e+02, percent-clipped=2.0
2023-03-09 16:08:58,982 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80494.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:09:05,413 INFO [train.py:898] (3/4) Epoch 23, batch 550, loss[loss=0.184, simple_loss=0.2735, pruned_loss=0.04729, over 17017.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2519, pruned_loss=0.03607, over 3359082.16 frames. ], batch size: 78, lr: 4.93e-03, grad_scale: 8.0
2023-03-09 16:09:06,897 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6576, 3.4942, 2.3431, 4.5021, 3.1739, 4.2971, 2.6983, 4.0877], device='cuda:3'), covar=tensor([0.0607, 0.0782, 0.1407, 0.0418, 0.0807, 0.0352, 0.1075, 0.0364], device='cuda:3'), in_proj_covar=tensor([0.0216, 0.0226, 0.0190, 0.0286, 0.0194, 0.0265, 0.0204, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 16:09:34,050 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80524.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:09:52,725 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.83 vs. limit=5.0
2023-03-09 16:10:02,835 INFO [train.py:898] (3/4) Epoch 23, batch 600, loss[loss=0.1629, simple_loss=0.2623, pruned_loss=0.03175, over 18316.00 frames. ], tot_loss[loss=0.1617, simple_loss=0.2519, pruned_loss=0.03572, over 3418655.63 frames. ], batch size: 57, lr: 4.93e-03, grad_scale: 8.0
2023-03-09 16:10:10,541 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80555.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:10:44,261 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.750e+02 2.691e+02 3.309e+02 4.060e+02 6.105e+02, threshold=6.618e+02, percent-clipped=2.0
2023-03-09 16:10:59,883 INFO [train.py:898] (3/4) Epoch 23, batch 650, loss[loss=0.1589, simple_loss=0.2483, pruned_loss=0.03478, over 18377.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2506, pruned_loss=0.03544, over 3446668.83 frames. ], batch size: 50, lr: 4.93e-03, grad_scale: 8.0
2023-03-09 16:11:47,303 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80639.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:11:58,521 INFO [train.py:898] (3/4) Epoch 23, batch 700, loss[loss=0.155, simple_loss=0.2439, pruned_loss=0.033, over 18284.00 frames. ], tot_loss[loss=0.161, simple_loss=0.2511, pruned_loss=0.03549, over 3484983.27 frames. ], batch size: 49, lr: 4.93e-03, grad_scale: 8.0
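The scaling.py:679 "Whitening" lines compare a measured statistic against a limit for a group of channels (e.g. metric=1.27 vs. limit=2.0). One plausible statistic of this kind, measuring how far a group's covariance is from a multiple of the identity, is sketched below; the exact metric computed by scaling.py is not shown in the log, so treat this as an assumption rather than the real implementation:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (num_frames, num_channels); channels split into num_groups groups.
        n, c = x.shape
        xg = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)  # (G, N, C/G)
        xg = xg - xg.mean(dim=1, keepdim=True)
        cov = xg.transpose(1, 2) @ xg / n                 # per-group covariance
        eigs = torch.linalg.eigvalsh(cov)                 # real, ascending
        # Ratio of mean squared eigenvalue to squared mean eigenvalue:
        # exactly 1.0 for perfectly white features, larger when unbalanced.
        ratio = (eigs ** 2).mean(dim=1) / eigs.mean(dim=1).clamp(min=1e-20) ** 2
        return ratio.mean()

Under this reading, metric=1.27 vs. limit=2.0 would mean the activations in that module are comfortably close to white.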
2023-03-09 16:12:41,894 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.765e+02 2.683e+02 3.069e+02 3.814e+02 7.668e+02, threshold=6.138e+02, percent-clipped=2.0
2023-03-09 16:12:57,647 INFO [train.py:898] (3/4) Epoch 23, batch 750, loss[loss=0.1557, simple_loss=0.2543, pruned_loss=0.0285, over 18320.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2515, pruned_loss=0.03556, over 3508449.69 frames. ], batch size: 54, lr: 4.93e-03, grad_scale: 8.0
2023-03-09 16:13:41,972 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80736.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:13:56,516 INFO [train.py:898] (3/4) Epoch 23, batch 800, loss[loss=0.1518, simple_loss=0.2332, pruned_loss=0.03517, over 17771.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2512, pruned_loss=0.03567, over 3533790.60 frames. ], batch size: 39, lr: 4.93e-03, grad_scale: 8.0
2023-03-09 16:13:56,750 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80749.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:14:25,191 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80773.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:14:38,918 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.739e+02 2.483e+02 2.864e+02 3.599e+02 5.476e+02, threshold=5.727e+02, percent-clipped=0.0
2023-03-09 16:14:53,003 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80797.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:14:54,772 INFO [train.py:898] (3/4) Epoch 23, batch 850, loss[loss=0.1363, simple_loss=0.2211, pruned_loss=0.02577, over 18471.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2515, pruned_loss=0.03566, over 3561758.72 frames. ], batch size: 43, lr: 4.93e-03, grad_scale: 8.0
2023-03-09 16:15:18,910 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80819.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:15:36,729 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80834.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:15:52,811 INFO [train.py:898] (3/4) Epoch 23, batch 900, loss[loss=0.1446, simple_loss=0.2267, pruned_loss=0.03129, over 18161.00 frames. ], tot_loss[loss=0.1617, simple_loss=0.2518, pruned_loss=0.03579, over 3554680.23 frames. ], batch size: 44, lr: 4.92e-03, grad_scale: 8.0
2023-03-09 16:15:54,149 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80850.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:16:34,827 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.923e+02 2.539e+02 2.932e+02 3.679e+02 1.116e+03, threshold=5.864e+02, percent-clipped=6.0
2023-03-09 16:16:51,057 INFO [train.py:898] (3/4) Epoch 23, batch 950, loss[loss=0.1679, simple_loss=0.2636, pruned_loss=0.03613, over 18302.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2515, pruned_loss=0.03562, over 3566602.32 frames. ], batch size: 57, lr: 4.92e-03, grad_scale: 8.0
2023-03-09 16:17:37,191 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80939.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:17:48,661 INFO [train.py:898] (3/4) Epoch 23, batch 1000, loss[loss=0.1714, simple_loss=0.2651, pruned_loss=0.03887, over 17380.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2526, pruned_loss=0.03618, over 3554800.46 frames. ], batch size: 78, lr: 4.92e-03, grad_scale: 8.0
2023-03-09 16:18:01,224 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7736, 5.2967, 5.2711, 5.3441, 4.7612, 5.2235, 4.5910, 5.1771], device='cuda:3'), covar=tensor([0.0261, 0.0300, 0.0204, 0.0362, 0.0435, 0.0236, 0.1070, 0.0295], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0262, 0.0256, 0.0333, 0.0273, 0.0269, 0.0306, 0.0262], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 16:18:09,571 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-09 16:18:30,653 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.676e+02 2.676e+02 3.165e+02 3.583e+02 7.202e+02, threshold=6.331e+02, percent-clipped=5.0
2023-03-09 16:18:33,150 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=80987.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:18:46,999 INFO [train.py:898] (3/4) Epoch 23, batch 1050, loss[loss=0.1718, simple_loss=0.2727, pruned_loss=0.03541, over 18486.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2528, pruned_loss=0.03607, over 3564049.31 frames. ], batch size: 53, lr: 4.92e-03, grad_scale: 8.0
2023-03-09 16:19:45,394 INFO [train.py:898] (3/4) Epoch 23, batch 1100, loss[loss=0.1819, simple_loss=0.2718, pruned_loss=0.04597, over 18257.00 frames. ], tot_loss[loss=0.1624, simple_loss=0.2527, pruned_loss=0.03606, over 3567391.66 frames. ], batch size: 60, lr: 4.92e-03, grad_scale: 8.0
2023-03-09 16:19:45,671 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81049.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:20:03,728 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
2023-03-09 16:20:27,981 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.813e+02 2.500e+02 2.967e+02 3.515e+02 7.145e+02, threshold=5.934e+02, percent-clipped=1.0
2023-03-09 16:20:35,939 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81092.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:20:41,602 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=81097.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:20:43,684 INFO [train.py:898] (3/4) Epoch 23, batch 1150, loss[loss=0.1792, simple_loss=0.2699, pruned_loss=0.04428, over 15970.00 frames. ], tot_loss[loss=0.1619, simple_loss=0.2519, pruned_loss=0.03597, over 3574598.11 frames. ], batch size: 94, lr: 4.92e-03, grad_scale: 8.0
2023-03-09 16:21:03,407 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81116.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:21:06,639 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81119.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:21:18,139 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81129.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:21:41,475 INFO [train.py:898] (3/4) Epoch 23, batch 1200, loss[loss=0.1645, simple_loss=0.2618, pruned_loss=0.03356, over 18360.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2519, pruned_loss=0.03566, over 3583743.50 frames. ], batch size: 55, lr: 4.91e-03, grad_scale: 8.0
2023-03-09 16:21:42,936 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81150.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:22:02,582 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=81167.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:22:13,719 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81177.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:22:18,694 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3861, 5.8846, 5.5291, 5.6412, 5.4608, 5.3285, 5.9211, 5.8983], device='cuda:3'), covar=tensor([0.1285, 0.0764, 0.0507, 0.0802, 0.1687, 0.0821, 0.0722, 0.0746], device='cuda:3'), in_proj_covar=tensor([0.0620, 0.0536, 0.0385, 0.0565, 0.0762, 0.0559, 0.0773, 0.0583], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 16:22:22,858 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.778e+02 2.664e+02 3.163e+02 3.705e+02 6.920e+02, threshold=6.326e+02, percent-clipped=3.0
2023-03-09 16:22:29,082 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8320, 3.0571, 4.4784, 3.7065, 2.8261, 4.6772, 4.0143, 2.7969], device='cuda:3'), covar=tensor([0.0424, 0.1306, 0.0269, 0.0474, 0.1434, 0.0231, 0.0503, 0.1046], device='cuda:3'), in_proj_covar=tensor([0.0211, 0.0236, 0.0212, 0.0163, 0.0223, 0.0208, 0.0246, 0.0194], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 16:22:39,003 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=81198.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:22:39,947 INFO [train.py:898] (3/4) Epoch 23, batch 1250, loss[loss=0.1642, simple_loss=0.2574, pruned_loss=0.0355, over 18571.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2516, pruned_loss=0.03563, over 3585671.23 frames. ], batch size: 54, lr: 4.91e-03, grad_scale: 8.0
2023-03-09 16:23:01,660 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-09 16:23:37,482 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6918, 3.6698, 3.4916, 3.2142, 3.4333, 2.8676, 2.7725, 3.6920], device='cuda:3'), covar=tensor([0.0067, 0.0096, 0.0093, 0.0135, 0.0106, 0.0182, 0.0205, 0.0073], device='cuda:3'), in_proj_covar=tensor([0.0144, 0.0164, 0.0137, 0.0192, 0.0145, 0.0182, 0.0185, 0.0124], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 16:23:39,307 INFO [train.py:898] (3/4) Epoch 23, batch 1300, loss[loss=0.1444, simple_loss=0.2204, pruned_loss=0.03413, over 17764.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2515, pruned_loss=0.0356, over 3592963.63 frames. ], batch size: 39, lr: 4.91e-03, grad_scale: 8.0
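Each train.py:898 entry reports two losses: loss[...] for the current batch, with its own frame count, and tot_loss[...] over a much larger, steadily growing frame count (about 1.4M frames at batch 100, ~3.6M by batch 1250). That pattern is consistent with a frame-weighted running average over the epoch so far, which the sketch below illustrates; the actual aggregation in train.py may differ (e.g. a decaying window), so treat this as an assumption:

    class FrameWeightedAverage:
        # Running average of loss weighted by the number of frames per batch.
        def __init__(self):
            self.weighted_sum = 0.0
            self.num_frames = 0.0

        def update(self, batch_loss, batch_frames):
            self.weighted_sum += batch_loss * batch_frames
            self.num_frames += batch_frames
            return self.weighted_sum / self.num_frames   # current tot_loss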
2023-03-09 16:24:00,733 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0164, 5.4630, 2.7984, 5.3390, 5.1918, 5.5000, 5.3059, 2.9292], device='cuda:3'), covar=tensor([0.0208, 0.0073, 0.0788, 0.0062, 0.0078, 0.0080, 0.0090, 0.0873], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0080, 0.0095, 0.0096, 0.0086, 0.0077, 0.0085, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 16:24:22,196 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.771e+02 2.557e+02 2.964e+02 3.868e+02 7.836e+02, threshold=5.928e+02, percent-clipped=1.0
2023-03-09 16:24:22,521 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5923, 6.1153, 5.7184, 5.9102, 5.7786, 5.5913, 6.2099, 6.1547], device='cuda:3'), covar=tensor([0.1134, 0.0700, 0.0416, 0.0662, 0.1339, 0.0735, 0.0573, 0.0686], device='cuda:3'), in_proj_covar=tensor([0.0616, 0.0533, 0.0382, 0.0563, 0.0759, 0.0555, 0.0766, 0.0580], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 16:24:37,826 INFO [train.py:898] (3/4) Epoch 23, batch 1350, loss[loss=0.1642, simple_loss=0.2527, pruned_loss=0.0379, over 18386.00 frames. ], tot_loss[loss=0.1609, simple_loss=0.251, pruned_loss=0.0354, over 3604338.56 frames. ], batch size: 46, lr: 4.91e-03, grad_scale: 4.0
2023-03-09 16:24:38,231 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81299.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:24:40,503 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81301.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:25:36,790 INFO [train.py:898] (3/4) Epoch 23, batch 1400, loss[loss=0.1659, simple_loss=0.2631, pruned_loss=0.03428, over 18616.00 frames. ], tot_loss[loss=0.1598, simple_loss=0.2496, pruned_loss=0.03497, over 3610580.79 frames. ], batch size: 52, lr: 4.91e-03, grad_scale: 4.0
2023-03-09 16:25:46,061 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9179, 5.3737, 5.3865, 5.4017, 4.8347, 5.2657, 4.6824, 5.2769], device='cuda:3'), covar=tensor([0.0239, 0.0269, 0.0177, 0.0395, 0.0423, 0.0231, 0.1018, 0.0294], device='cuda:3'), in_proj_covar=tensor([0.0221, 0.0265, 0.0259, 0.0336, 0.0276, 0.0272, 0.0311, 0.0263], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 16:25:49,607 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81360.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:25:51,933 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81362.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:26:19,228 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.872e+02 2.620e+02 3.077e+02 3.710e+02 7.565e+02, threshold=6.154e+02, percent-clipped=6.0
2023-03-09 16:26:26,926 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81392.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:26:34,017 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0
2023-03-09 16:26:35,549 INFO [train.py:898] (3/4) Epoch 23, batch 1450, loss[loss=0.1982, simple_loss=0.2882, pruned_loss=0.05411, over 18295.00 frames. ], tot_loss[loss=0.1606, simple_loss=0.2506, pruned_loss=0.03527, over 3602263.36 frames. ], batch size: 57, lr: 4.91e-03, grad_scale: 4.0
2023-03-09 16:26:50,810 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5235, 5.5186, 5.1166, 5.4542, 5.4407, 4.7988, 5.3508, 5.1031], device='cuda:3'), covar=tensor([0.0420, 0.0403, 0.1455, 0.0746, 0.0583, 0.0442, 0.0472, 0.0965], device='cuda:3'), in_proj_covar=tensor([0.0494, 0.0567, 0.0702, 0.0438, 0.0457, 0.0517, 0.0544, 0.0680], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:27:08,910 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.57 vs. limit=2.0
2023-03-09 16:27:10,846 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81429.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:27:23,602 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=81440.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:27:24,928 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6784, 3.6820, 3.4854, 3.1877, 3.4324, 2.8590, 2.8630, 3.6634], device='cuda:3'), covar=tensor([0.0061, 0.0087, 0.0090, 0.0127, 0.0100, 0.0196, 0.0196, 0.0076], device='cuda:3'), in_proj_covar=tensor([0.0143, 0.0164, 0.0137, 0.0191, 0.0145, 0.0181, 0.0185, 0.0124], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 16:27:34,462 INFO [train.py:898] (3/4) Epoch 23, batch 1500, loss[loss=0.1609, simple_loss=0.2553, pruned_loss=0.03327, over 18406.00 frames. ], tot_loss[loss=0.1602, simple_loss=0.2503, pruned_loss=0.03508, over 3612702.91 frames. ], batch size: 52, lr: 4.91e-03, grad_scale: 4.0
2023-03-09 16:28:01,906 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81472.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:28:07,570 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=81477.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:28:17,386 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.729e+02 2.620e+02 2.957e+02 3.344e+02 7.866e+02, threshold=5.914e+02, percent-clipped=3.0
2023-03-09 16:28:32,684 INFO [train.py:898] (3/4) Epoch 23, batch 1550, loss[loss=0.1639, simple_loss=0.2577, pruned_loss=0.03506, over 18062.00 frames. ], tot_loss[loss=0.1603, simple_loss=0.2504, pruned_loss=0.03509, over 3606279.15 frames. ], batch size: 65, lr: 4.90e-03, grad_scale: 4.0
2023-03-09 16:29:31,031 INFO [train.py:898] (3/4) Epoch 23, batch 1600, loss[loss=0.1694, simple_loss=0.2603, pruned_loss=0.03923, over 18126.00 frames. ], tot_loss[loss=0.161, simple_loss=0.2509, pruned_loss=0.03561, over 3593179.06 frames. ], batch size: 62, lr: 4.90e-03, grad_scale: 8.0
2023-03-09 16:30:15,102 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.871e+02 2.795e+02 3.306e+02 4.129e+02 9.714e+02, threshold=6.611e+02, percent-clipped=7.0
2023-03-09 16:30:29,170 INFO [train.py:898] (3/4) Epoch 23, batch 1650, loss[loss=0.1508, simple_loss=0.2447, pruned_loss=0.0285, over 18501.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2513, pruned_loss=0.03574, over 3578976.90 frames. ], batch size: 47, lr: 4.90e-03, grad_scale: 4.0
2023-03-09 16:30:29,569 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5504, 3.9193, 3.8616, 3.1711, 3.4035, 3.2291, 2.4817, 2.3850], device='cuda:3'), covar=tensor([0.0260, 0.0184, 0.0107, 0.0310, 0.0348, 0.0242, 0.0732, 0.0861], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0061, 0.0065, 0.0070, 0.0091, 0.0068, 0.0078, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:31:16,862 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6998, 2.4681, 2.7095, 2.7251, 3.3153, 4.8696, 4.8490, 3.3413], device='cuda:3'), covar=tensor([0.1939, 0.2376, 0.3014, 0.1895, 0.2360, 0.0243, 0.0325, 0.1010], device='cuda:3'), in_proj_covar=tensor([0.0309, 0.0350, 0.0388, 0.0282, 0.0392, 0.0251, 0.0298, 0.0260], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 16:31:21,481 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.67 vs. limit=2.0
2023-03-09 16:31:28,194 INFO [train.py:898] (3/4) Epoch 23, batch 1700, loss[loss=0.1686, simple_loss=0.2615, pruned_loss=0.03784, over 18253.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2514, pruned_loss=0.03568, over 3575621.25 frames. ], batch size: 60, lr: 4.90e-03, grad_scale: 4.0
2023-03-09 16:31:35,764 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81655.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:31:35,875 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4152, 5.4236, 5.0417, 5.3252, 5.3543, 4.7027, 5.2498, 5.0240], device='cuda:3'), covar=tensor([0.0453, 0.0464, 0.1341, 0.0853, 0.0626, 0.0440, 0.0446, 0.1114], device='cuda:3'), in_proj_covar=tensor([0.0498, 0.0566, 0.0702, 0.0441, 0.0458, 0.0519, 0.0547, 0.0681], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0004, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:31:38,671 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81657.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:31:42,220 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3202, 5.8386, 5.4852, 5.5646, 5.4786, 5.2792, 5.8847, 5.8548], device='cuda:3'), covar=tensor([0.1201, 0.0673, 0.0503, 0.0764, 0.1274, 0.0727, 0.0580, 0.0640], device='cuda:3'), in_proj_covar=tensor([0.0617, 0.0532, 0.0384, 0.0563, 0.0759, 0.0555, 0.0765, 0.0581], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 16:31:53,574 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81670.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:32:13,034 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.642e+02 2.453e+02 2.824e+02 3.386e+02 8.042e+02, threshold=5.649e+02, percent-clipped=1.0
2023-03-09 16:32:15,916 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1925, 4.3050, 2.5148, 4.1976, 5.3797, 2.6615, 4.0175, 4.1643], device='cuda:3'), covar=tensor([0.0151, 0.1390, 0.1630, 0.0641, 0.0077, 0.1261, 0.0591, 0.0684], device='cuda:3'), in_proj_covar=tensor([0.0174, 0.0277, 0.0208, 0.0201, 0.0134, 0.0187, 0.0219, 0.0229], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 16:32:26,757 INFO [train.py:898] (3/4) Epoch 23, batch 1750, loss[loss=0.1585, simple_loss=0.2547, pruned_loss=0.03119, over 18357.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2518, pruned_loss=0.03572, over 3580785.61 frames. ], batch size: 55, lr: 4.90e-03, grad_scale: 4.0
2023-03-09 16:33:04,553 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81731.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 16:33:25,097 INFO [train.py:898] (3/4) Epoch 23, batch 1800, loss[loss=0.1497, simple_loss=0.2444, pruned_loss=0.02754, over 18366.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2515, pruned_loss=0.03581, over 3587388.97 frames. ], batch size: 50, lr: 4.90e-03, grad_scale: 2.0
2023-03-09 16:33:52,223 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81772.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:34:10,548 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.038e+02 2.659e+02 3.003e+02 3.499e+02 8.184e+02, threshold=6.005e+02, percent-clipped=4.0
2023-03-09 16:34:23,257 INFO [train.py:898] (3/4) Epoch 23, batch 1850, loss[loss=0.1528, simple_loss=0.2503, pruned_loss=0.02764, over 18483.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2515, pruned_loss=0.03579, over 3589331.75 frames. ], batch size: 51, lr: 4.90e-03, grad_scale: 2.0
2023-03-09 16:34:41,672 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2699, 5.2245, 5.4779, 5.4738, 5.0925, 5.9798, 5.6666, 5.2225], device='cuda:3'), covar=tensor([0.1104, 0.0619, 0.0729, 0.0736, 0.1319, 0.0749, 0.0709, 0.1642], device='cuda:3'), in_proj_covar=tensor([0.0364, 0.0290, 0.0316, 0.0318, 0.0333, 0.0433, 0.0286, 0.0423], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 16:34:42,288 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-09 16:34:48,461 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=81820.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:35:07,346 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-09 16:35:11,523 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6239, 3.3047, 2.2260, 4.3038, 3.1104, 4.0777, 2.4974, 3.8178], device='cuda:3'), covar=tensor([0.0660, 0.0958, 0.1568, 0.0559, 0.0875, 0.0387, 0.1297, 0.0438], device='cuda:3'), in_proj_covar=tensor([0.0215, 0.0228, 0.0191, 0.0287, 0.0194, 0.0268, 0.0205, 0.0203], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 16:35:19,261 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9365, 5.4084, 2.8406, 5.2612, 5.1300, 5.4834, 5.3493, 2.9696], device='cuda:3'), covar=tensor([0.0247, 0.0072, 0.0779, 0.0072, 0.0078, 0.0075, 0.0077, 0.0910], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0081, 0.0095, 0.0096, 0.0086, 0.0077, 0.0085, 0.0096], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 16:35:21,112 INFO [train.py:898] (3/4) Epoch 23, batch 1900, loss[loss=0.144, simple_loss=0.2267, pruned_loss=0.03061, over 18400.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2513, pruned_loss=0.03587, over 3588384.62 frames. ], batch size: 42, lr: 4.89e-03, grad_scale: 2.0
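Note how grad_scale moves in powers of two over this stretch: 8.0 earlier in the epoch, 4.0 around batch 1350, 2.0 by batch 1800, and back up to 4.0 at batch 2000. With use_fp16 training this is the signature of dynamic loss scaling: the scale is halved when scaled gradients overflow and grown again after a run of clean steps. A minimal sketch of that mechanism using PyTorch's stock GradScaler is below; the recipe's own scaler and hyperparameters may differ:

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=8.0, growth_interval=2000)

    def fp16_step(model, optimizer, batch, compute_loss):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():        # half-precision forward pass
            loss = compute_loss(model, batch)
        scaler.scale(loss).backward()          # backward on the scaled loss
        scaler.step(optimizer)                 # step is skipped if grads contain inf/NaN
        scaler.update()                        # halve on overflow, else slowly grow
        return loss.detach(), scaler.get_scale()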
2023-03-09 16:35:26,951 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3408, 5.2945, 5.6214, 5.6964, 5.1855, 6.1140, 5.8346, 5.3809], device='cuda:3'), covar=tensor([0.1073, 0.0522, 0.0708, 0.0662, 0.1343, 0.0696, 0.0533, 0.1624], device='cuda:3'), in_proj_covar=tensor([0.0366, 0.0291, 0.0316, 0.0319, 0.0335, 0.0433, 0.0288, 0.0425], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 16:36:07,198 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.831e+02 2.605e+02 3.105e+02 3.631e+02 5.941e+02, threshold=6.209e+02, percent-clipped=0.0
2023-03-09 16:36:20,013 INFO [train.py:898] (3/4) Epoch 23, batch 1950, loss[loss=0.1653, simple_loss=0.2605, pruned_loss=0.03502, over 18311.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2514, pruned_loss=0.03578, over 3597718.19 frames. ], batch size: 54, lr: 4.89e-03, grad_scale: 2.0
2023-03-09 16:36:41,391 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9477, 4.6188, 4.6693, 3.5207, 3.8083, 3.5853, 2.9400, 2.6615], device='cuda:3'), covar=tensor([0.0215, 0.0160, 0.0071, 0.0307, 0.0330, 0.0236, 0.0654, 0.0857], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0062, 0.0065, 0.0070, 0.0091, 0.0068, 0.0078, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:37:18,004 INFO [train.py:898] (3/4) Epoch 23, batch 2000, loss[loss=0.1625, simple_loss=0.254, pruned_loss=0.03552, over 18380.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2519, pruned_loss=0.03586, over 3596732.02 frames. ], batch size: 50, lr: 4.89e-03, grad_scale: 4.0
2023-03-09 16:37:25,076 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81955.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:37:27,420 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81957.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:38:03,392 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81987.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:38:04,119 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.810e+02 2.510e+02 2.896e+02 3.401e+02 7.542e+02, threshold=5.791e+02, percent-clipped=1.0
2023-03-09 16:38:17,073 INFO [train.py:898] (3/4) Epoch 23, batch 2050, loss[loss=0.1579, simple_loss=0.2522, pruned_loss=0.0318, over 18296.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2514, pruned_loss=0.03576, over 3585889.55 frames. ], batch size: 49, lr: 4.89e-03, grad_scale: 4.0
2023-03-09 16:38:26,641 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=82003.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:38:28,882 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=82005.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:38:54,184 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82026.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 16:39:19,058 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82048.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:39:19,763 INFO [train.py:898] (3/4) Epoch 23, batch 2100, loss[loss=0.1515, simple_loss=0.2477, pruned_loss=0.02761, over 18481.00 frames. ], tot_loss[loss=0.1617, simple_loss=0.2515, pruned_loss=0.03601, over 3575962.12 frames. ], batch size: 51, lr: 4.89e-03, grad_scale: 4.0
2023-03-09 16:39:27,621 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82055.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:40:05,448 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.125e+02 2.645e+02 3.113e+02 3.776e+02 5.395e+02, threshold=6.226e+02, percent-clipped=0.0
2023-03-09 16:40:05,777 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5260, 5.5285, 4.9907, 5.4498, 5.5218, 4.9586, 5.4072, 5.0292], device='cuda:3'), covar=tensor([0.0553, 0.0579, 0.1843, 0.0997, 0.0660, 0.0449, 0.0560, 0.1226], device='cuda:3'), in_proj_covar=tensor([0.0504, 0.0569, 0.0712, 0.0441, 0.0460, 0.0519, 0.0549, 0.0688], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:40:18,011 INFO [train.py:898] (3/4) Epoch 23, batch 2150, loss[loss=0.16, simple_loss=0.2506, pruned_loss=0.03471, over 18376.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2517, pruned_loss=0.03572, over 3586310.33 frames. ], batch size: 50, lr: 4.89e-03, grad_scale: 4.0
2023-03-09 16:40:27,971 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7866, 5.3100, 5.2830, 5.3113, 4.7644, 5.2072, 4.6450, 5.1975], device='cuda:3'), covar=tensor([0.0256, 0.0302, 0.0206, 0.0442, 0.0439, 0.0252, 0.1123, 0.0329], device='cuda:3'), in_proj_covar=tensor([0.0219, 0.0263, 0.0259, 0.0334, 0.0274, 0.0270, 0.0308, 0.0263], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 16:40:38,348 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82116.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:41:16,341 INFO [train.py:898] (3/4) Epoch 23, batch 2200, loss[loss=0.1518, simple_loss=0.2287, pruned_loss=0.03748, over 18560.00 frames. ], tot_loss[loss=0.1609, simple_loss=0.2511, pruned_loss=0.03534, over 3600318.72 frames. ], batch size: 45, lr: 4.88e-03, grad_scale: 4.0
2023-03-09 16:41:55,937 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9750, 3.9028, 5.3684, 3.0485, 4.7203, 2.7780, 3.2730, 2.0642], device='cuda:3'), covar=tensor([0.1137, 0.0919, 0.0114, 0.0932, 0.0444, 0.2535, 0.2658, 0.2135], device='cuda:3'), in_proj_covar=tensor([0.0226, 0.0249, 0.0206, 0.0203, 0.0262, 0.0277, 0.0329, 0.0242], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 16:41:57,312 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-09 16:42:00,324 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3512, 5.3446, 4.8385, 5.3111, 5.2719, 4.7496, 5.2153, 4.8893], device='cuda:3'), covar=tensor([0.0571, 0.0605, 0.1660, 0.0839, 0.0727, 0.0545, 0.0590, 0.1143], device='cuda:3'), in_proj_covar=tensor([0.0507, 0.0574, 0.0715, 0.0442, 0.0464, 0.0522, 0.0552, 0.0691], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:42:02,197 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.746e+02 2.684e+02 3.246e+02 4.259e+02 8.856e+02, threshold=6.492e+02, percent-clipped=7.0
2023-03-09 16:42:14,733 INFO [train.py:898] (3/4) Epoch 23, batch 2250, loss[loss=0.1521, simple_loss=0.2417, pruned_loss=0.03121, over 18271.00 frames. ], tot_loss[loss=0.1604, simple_loss=0.2504, pruned_loss=0.03524, over 3608517.77 frames. ], batch size: 47, lr: 4.88e-03, grad_scale: 4.0
2023-03-09 16:42:55,646 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0
2023-03-09 16:43:13,070 INFO [train.py:898] (3/4) Epoch 23, batch 2300, loss[loss=0.1645, simple_loss=0.2527, pruned_loss=0.03811, over 18298.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.251, pruned_loss=0.03555, over 3592080.91 frames. ], batch size: 49, lr: 4.88e-03, grad_scale: 4.0
2023-03-09 16:43:46,410 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-09 16:43:59,037 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.010e+02 2.750e+02 3.270e+02 3.982e+02 6.063e+02, threshold=6.540e+02, percent-clipped=0.0
2023-03-09 16:44:02,192 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7505, 3.0877, 4.2977, 3.6886, 2.6709, 4.5038, 3.8645, 2.7481], device='cuda:3'), covar=tensor([0.0521, 0.1296, 0.0304, 0.0491, 0.1683, 0.0216, 0.0653, 0.1089], device='cuda:3'), in_proj_covar=tensor([0.0217, 0.0243, 0.0220, 0.0169, 0.0230, 0.0215, 0.0253, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 16:44:09,475 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0
2023-03-09 16:44:11,844 INFO [train.py:898] (3/4) Epoch 23, batch 2350, loss[loss=0.1585, simple_loss=0.2494, pruned_loss=0.03379, over 17114.00 frames. ], tot_loss[loss=0.1606, simple_loss=0.2507, pruned_loss=0.03529, over 3584680.12 frames. ], batch size: 78, lr: 4.88e-03, grad_scale: 4.0
2023-03-09 16:44:43,752 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=82326.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:45:04,054 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82343.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:45:07,648 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82346.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:45:10,707 INFO [train.py:898] (3/4) Epoch 23, batch 2400, loss[loss=0.1532, simple_loss=0.2472, pruned_loss=0.02956, over 18348.00 frames. ], tot_loss[loss=0.1596, simple_loss=0.2495, pruned_loss=0.03486, over 3587039.22 frames. ], batch size: 46, lr: 4.88e-03, grad_scale: 8.0
2023-03-09 16:45:38,989 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. limit=5.0
2023-03-09 16:45:39,527 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=82374.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:45:52,506 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9320, 4.6507, 4.7609, 3.6165, 3.9295, 3.5429, 2.9736, 2.6601], device='cuda:3'), covar=tensor([0.0236, 0.0169, 0.0072, 0.0269, 0.0327, 0.0243, 0.0637, 0.0845], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0061, 0.0064, 0.0069, 0.0091, 0.0068, 0.0078, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:45:55,566 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.666e+02 2.631e+02 3.119e+02 3.737e+02 9.140e+02, threshold=6.238e+02, percent-clipped=2.0
2023-03-09 16:45:57,589 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.51 vs. limit=5.0
2023-03-09 16:46:09,306 INFO [train.py:898] (3/4) Epoch 23, batch 2450, loss[loss=0.1358, simple_loss=0.2237, pruned_loss=0.02395, over 18513.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2494, pruned_loss=0.03469, over 3589779.70 frames. ], batch size: 44, lr: 4.88e-03, grad_scale: 8.0
2023-03-09 16:46:18,924 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82407.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:46:23,289 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82411.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:46:51,583 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7468, 4.2856, 4.2318, 3.3230, 3.6975, 3.4081, 2.6535, 2.2885], device='cuda:3'), covar=tensor([0.0224, 0.0141, 0.0093, 0.0289, 0.0304, 0.0222, 0.0657, 0.0871], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0061, 0.0064, 0.0069, 0.0090, 0.0067, 0.0078, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:47:08,007 INFO [train.py:898] (3/4) Epoch 23, batch 2500, loss[loss=0.1554, simple_loss=0.2354, pruned_loss=0.03772, over 18405.00 frames. ], tot_loss[loss=0.1602, simple_loss=0.2504, pruned_loss=0.03503, over 3592100.39 frames. ], batch size: 42, lr: 4.88e-03, grad_scale: 8.0
2023-03-09 16:47:23,964 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3693, 5.8783, 5.4912, 5.7054, 5.4772, 5.3446, 5.9554, 5.8931], device='cuda:3'), covar=tensor([0.1193, 0.0748, 0.0493, 0.0681, 0.1459, 0.0696, 0.0549, 0.0700], device='cuda:3'), in_proj_covar=tensor([0.0621, 0.0543, 0.0386, 0.0566, 0.0766, 0.0557, 0.0774, 0.0585], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 16:47:52,903 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.741e+02 2.534e+02 3.083e+02 3.529e+02 5.828e+02, threshold=6.166e+02, percent-clipped=0.0
2023-03-09 16:47:58,866 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82493.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 16:48:01,716 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9533, 3.6864, 5.0685, 3.0726, 4.4631, 2.6157, 3.1554, 1.8064], device='cuda:3'), covar=tensor([0.1183, 0.0978, 0.0165, 0.0946, 0.0468, 0.2544, 0.2696, 0.2261], device='cuda:3'), in_proj_covar=tensor([0.0226, 0.0250, 0.0207, 0.0203, 0.0263, 0.0276, 0.0330, 0.0242], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 16:48:06,036 INFO [train.py:898] (3/4) Epoch 23, batch 2550, loss[loss=0.1457, simple_loss=0.2295, pruned_loss=0.03097, over 18481.00 frames. ], tot_loss[loss=0.1608, simple_loss=0.2512, pruned_loss=0.03523, over 3593724.68 frames. ], batch size: 44, lr: 4.87e-03, grad_scale: 8.0
2023-03-09 16:48:11,725 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9747, 4.5589, 4.6650, 3.5564, 3.9189, 3.6540, 2.7867, 2.6555], device='cuda:3'), covar=tensor([0.0218, 0.0184, 0.0070, 0.0271, 0.0299, 0.0218, 0.0672, 0.0847], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0062, 0.0064, 0.0069, 0.0090, 0.0068, 0.0078, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:49:01,156 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7554, 2.8738, 2.6684, 2.9066, 3.7256, 3.7702, 3.2991, 3.0419], device='cuda:3'), covar=tensor([0.0171, 0.0293, 0.0533, 0.0394, 0.0169, 0.0138, 0.0345, 0.0392], device='cuda:3'), in_proj_covar=tensor([0.0139, 0.0139, 0.0163, 0.0160, 0.0135, 0.0119, 0.0155, 0.0160], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3')
2023-03-09 16:49:03,669 INFO [train.py:898] (3/4) Epoch 23, batch 2600, loss[loss=0.1337, simple_loss=0.2167, pruned_loss=0.02529, over 17670.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2511, pruned_loss=0.03516, over 3588227.85 frames. ], batch size: 39, lr: 4.87e-03, grad_scale: 8.0
2023-03-09 16:49:10,245 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82554.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 16:49:27,096 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82569.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:49:49,708 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.492e+02 2.620e+02 3.051e+02 3.714e+02 9.693e+02, threshold=6.103e+02, percent-clipped=7.0
2023-03-09 16:49:58,794 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9732, 4.6683, 4.7068, 3.5693, 3.9064, 3.6304, 2.7908, 2.7409], device='cuda:3'), covar=tensor([0.0222, 0.0176, 0.0076, 0.0264, 0.0328, 0.0220, 0.0661, 0.0817], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0062, 0.0064, 0.0070, 0.0091, 0.0068, 0.0078, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:50:01,839 INFO [train.py:898] (3/4) Epoch 23, batch 2650, loss[loss=0.1405, simple_loss=0.2222, pruned_loss=0.02936, over 18469.00 frames. ], tot_loss[loss=0.1603, simple_loss=0.2503, pruned_loss=0.0351, over 3584599.82 frames. ], batch size: 44, lr: 4.87e-03, grad_scale: 4.0
2023-03-09 16:50:14,771 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7873, 4.2710, 4.2446, 3.2601, 3.6224, 3.4210, 2.5986, 2.2480], device='cuda:3'), covar=tensor([0.0219, 0.0176, 0.0095, 0.0308, 0.0351, 0.0223, 0.0680, 0.0955], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0062, 0.0065, 0.0070, 0.0091, 0.0068, 0.0078, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:50:36,630 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0
2023-03-09 16:50:38,557 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82630.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:50:54,192 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=82643.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:51:00,480 INFO [train.py:898] (3/4) Epoch 23, batch 2700, loss[loss=0.1843, simple_loss=0.2788, pruned_loss=0.0449, over 18483.00 frames. ], tot_loss[loss=0.1599, simple_loss=0.2499, pruned_loss=0.03493, over 3593648.10 frames. ], batch size: 53, lr: 4.87e-03, grad_scale: 4.0
2023-03-09 16:51:46,588 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.924e+02 2.721e+02 3.267e+02 3.832e+02 9.086e+02, threshold=6.534e+02, percent-clipped=2.0
2023-03-09 16:51:49,671 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=82691.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:51:58,554 INFO [train.py:898] (3/4) Epoch 23, batch 2750, loss[loss=0.1767, simple_loss=0.2702, pruned_loss=0.0416, over 18259.00 frames. ], tot_loss[loss=0.1605, simple_loss=0.2507, pruned_loss=0.0352, over 3597832.21 frames. ], batch size: 60, lr: 4.87e-03, grad_scale: 4.0
2023-03-09 16:51:58,966 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9201, 4.5317, 4.6132, 3.5674, 3.8340, 3.5828, 2.6754, 2.6324], device='cuda:3'), covar=tensor([0.0225, 0.0200, 0.0088, 0.0273, 0.0314, 0.0232, 0.0714, 0.0840], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0062, 0.0065, 0.0069, 0.0091, 0.0068, 0.0078, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 16:52:02,236 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82702.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:52:12,911 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=82711.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:52:33,064 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9800, 5.0502, 5.0780, 4.8130, 4.8264, 4.8375, 5.1506, 5.1787], device='cuda:3'), covar=tensor([0.0060, 0.0071, 0.0051, 0.0114, 0.0051, 0.0173, 0.0083, 0.0089], device='cuda:3'), in_proj_covar=tensor([0.0097, 0.0071, 0.0076, 0.0095, 0.0077, 0.0106, 0.0089, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 16:52:55,994 INFO [train.py:898] (3/4) Epoch 23, batch 2800, loss[loss=0.1536, simple_loss=0.2405, pruned_loss=0.03331, over 18249.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2505, pruned_loss=0.03539, over 3592650.31 frames. ], batch size: 45, lr: 4.87e-03, grad_scale: 8.0
2023-03-09 16:53:08,588 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=82759.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:53:42,294 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.820e+02 2.582e+02 3.010e+02 3.495e+02 9.650e+02, threshold=6.020e+02, percent-clipped=2.0
2023-03-09 16:53:54,217 INFO [train.py:898] (3/4) Epoch 23, batch 2850, loss[loss=0.1495, simple_loss=0.2305, pruned_loss=0.03428, over 17707.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.251, pruned_loss=0.03554, over 3590764.57 frames. ], batch size: 39, lr: 4.87e-03, grad_scale: 8.0
2023-03-09 16:54:43,331 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9485, 5.4131, 5.3842, 5.4368, 4.8735, 5.2904, 4.7442, 5.2616], device='cuda:3'), covar=tensor([0.0221, 0.0259, 0.0180, 0.0381, 0.0396, 0.0222, 0.0983, 0.0304], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0263, 0.0259, 0.0335, 0.0275, 0.0273, 0.0307, 0.0262], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 16:54:52,996 INFO [train.py:898] (3/4) Epoch 23, batch 2900, loss[loss=0.1662, simple_loss=0.2601, pruned_loss=0.03613, over 18497.00 frames. ], tot_loss[loss=0.161, simple_loss=0.2508, pruned_loss=0.03557, over 3580498.93 frames. ], batch size: 53, lr: 4.86e-03, grad_scale: 8.0
2023-03-09 16:54:53,205 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82849.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 16:55:39,404 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.765e+02 2.643e+02 2.969e+02 3.593e+02 6.828e+02, threshold=5.939e+02, percent-clipped=2.0
2023-03-09 16:55:51,332 INFO [train.py:898] (3/4) Epoch 23, batch 2950, loss[loss=0.1839, simple_loss=0.2703, pruned_loss=0.04871, over 17826.00 frames. ], tot_loss[loss=0.161, simple_loss=0.251, pruned_loss=0.03556, over 3584321.84 frames. ], batch size: 70, lr: 4.86e-03, grad_scale: 8.0
2023-03-09 16:56:22,402 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82925.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:56:42,507 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.11 vs. limit=5.0
2023-03-09 16:56:49,699 INFO [train.py:898] (3/4) Epoch 23, batch 3000, loss[loss=0.1678, simple_loss=0.2634, pruned_loss=0.03606, over 18477.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.2509, pruned_loss=0.03562, over 3592237.54 frames. ], batch size: 59, lr: 4.86e-03, grad_scale: 8.0
2023-03-09 16:56:49,699 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 16:57:01,576 INFO [train.py:932] (3/4) Epoch 23, validation: loss=0.1503, simple_loss=0.2492, pruned_loss=0.02572, over 944034.00 frames.
2023-03-09 16:57:01,577 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 16:57:19,162 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82963.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:57:48,598 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.819e+02 2.491e+02 2.993e+02 3.582e+02 9.758e+02, threshold=5.985e+02, percent-clipped=2.0
2023-03-09 16:58:00,492 INFO [train.py:898] (3/4) Epoch 23, batch 3050, loss[loss=0.1685, simple_loss=0.2542, pruned_loss=0.04144, over 17004.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.2508, pruned_loss=0.03571, over 3592206.30 frames. ], batch size: 78, lr: 4.86e-03, grad_scale: 8.0
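Every valid_interval batches (batch 3000 in this epoch) training pauses for a validation pass over the dev sets, logging a frame-weighted validation loss and the peak CUDA memory. A sketch of what such a pass typically looks like; the helper below is illustrative rather than a copy of train.py:

    import torch

    def validate(model, valid_loader, compute_loss, device='cuda:3'):
        model.eval()
        weighted_sum, num_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in valid_loader:
                loss, frames = compute_loss(model, batch)   # per-frame loss, frame count
                weighted_sum += loss.item() * frames
                num_frames += frames
        model.train()
        # Peak memory in MB, as in "Maximum memory allocated so far is 19934MB".
        peak_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        return weighted_sum / num_frames, peak_mb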
2023-03-09 16:58:04,578 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83002.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:58:24,905 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9124, 2.9300, 2.1247, 3.3276, 2.5000, 2.9859, 2.3232, 2.8788], device='cuda:3'), covar=tensor([0.0563, 0.0730, 0.1151, 0.0531, 0.0746, 0.0288, 0.1004, 0.0470], device='cuda:3'), in_proj_covar=tensor([0.0217, 0.0228, 0.0191, 0.0288, 0.0193, 0.0268, 0.0204, 0.0203], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 16:58:30,303 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83024.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:58:59,527 INFO [train.py:898] (3/4) Epoch 23, batch 3100, loss[loss=0.147, simple_loss=0.2245, pruned_loss=0.03469, over 17723.00 frames. ], tot_loss[loss=0.1606, simple_loss=0.2502, pruned_loss=0.03548, over 3585467.54 frames. ], batch size: 39, lr: 4.86e-03, grad_scale: 4.0
2023-03-09 16:59:00,893 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=83050.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 16:59:10,472 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0074, 5.1099, 5.1169, 4.8789, 4.9115, 4.8334, 5.2147, 5.2686], device='cuda:3'), covar=tensor([0.0058, 0.0055, 0.0052, 0.0111, 0.0054, 0.0159, 0.0076, 0.0089], device='cuda:3'), in_proj_covar=tensor([0.0096, 0.0070, 0.0076, 0.0094, 0.0076, 0.0105, 0.0088, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 16:59:40,675 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-09 16:59:45,251 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-09 16:59:46,643 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.874e+02 2.570e+02 3.037e+02 3.721e+02 1.351e+03, threshold=6.073e+02, percent-clipped=1.0
2023-03-09 16:59:57,765 INFO [train.py:898] (3/4) Epoch 23, batch 3150, loss[loss=0.152, simple_loss=0.2405, pruned_loss=0.03178, over 18262.00 frames. ], tot_loss[loss=0.16, simple_loss=0.2497, pruned_loss=0.03514, over 3591969.67 frames. ], batch size: 47, lr: 4.86e-03, grad_scale: 4.0
2023-03-09 17:00:55,291 INFO [train.py:898] (3/4) Epoch 23, batch 3200, loss[loss=0.1578, simple_loss=0.2483, pruned_loss=0.03363, over 18491.00 frames. ], tot_loss[loss=0.1605, simple_loss=0.2503, pruned_loss=0.03529, over 3583379.49 frames. ], batch size: 51, lr: 4.86e-03, grad_scale: 8.0
2023-03-09 17:00:55,553 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83149.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 17:01:43,024 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.642e+02 2.631e+02 3.256e+02 4.142e+02 1.137e+03, threshold=6.512e+02, percent-clipped=5.0
2023-03-09 17:01:50,278 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83196.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 17:01:51,141 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=83197.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 17:01:53,674 INFO [train.py:898] (3/4) Epoch 23, batch 3250, loss[loss=0.1662, simple_loss=0.2614, pruned_loss=0.0355, over 18372.00 frames. ], tot_loss[loss=0.1606, simple_loss=0.2505, pruned_loss=0.03536, over 3590943.49 frames. ], batch size: 55, lr: 4.85e-03, grad_scale: 8.0
2023-03-09 17:02:25,243 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83225.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 17:02:51,889 INFO [train.py:898] (3/4) Epoch 23, batch 3300, loss[loss=0.1215, simple_loss=0.2021, pruned_loss=0.0205, over 18447.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2509, pruned_loss=0.03526, over 3595160.45 frames. ], batch size: 43, lr: 4.85e-03, grad_scale: 8.0
2023-03-09 17:03:02,601 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83257.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 17:03:21,862 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=83273.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 17:03:34,492 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83284.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 17:03:40,707 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.708e+02 2.590e+02 3.037e+02 3.666e+02 6.079e+02, threshold=6.073e+02, percent-clipped=0.0
2023-03-09 17:03:50,693 INFO [train.py:898] (3/4) Epoch 23, batch 3350, loss[loss=0.1582, simple_loss=0.244, pruned_loss=0.03617, over 18500.00 frames. ], tot_loss[loss=0.1604, simple_loss=0.2505, pruned_loss=0.03518, over 3590415.18 frames. ], batch size: 47, lr: 4.85e-03, grad_scale: 8.0
2023-03-09 17:04:14,303 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=83319.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 17:04:45,167 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83345.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 17:04:49,311 INFO [train.py:898] (3/4) Epoch 23, batch 3400, loss[loss=0.1645, simple_loss=0.2652, pruned_loss=0.03193, over 16073.00 frames. ], tot_loss[loss=0.16, simple_loss=0.2501, pruned_loss=0.03493, over 3598729.79 frames. ], batch size: 94, lr: 4.85e-03, grad_scale: 8.0
2023-03-09 17:05:17,234 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.70 vs. limit=2.0
2023-03-09 17:05:32,460 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.03 vs. limit=5.0
2023-03-09 17:05:37,374 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.783e+02 2.666e+02 3.258e+02 4.159e+02 1.161e+03, threshold=6.516e+02, percent-clipped=10.0
2023-03-09 17:05:47,434 INFO [train.py:898] (3/4) Epoch 23, batch 3450, loss[loss=0.1474, simple_loss=0.2391, pruned_loss=0.0278, over 18510.00 frames. ], tot_loss[loss=0.1606, simple_loss=0.2505, pruned_loss=0.03535, over 3594164.48 frames. ], batch size: 47, lr: 4.85e-03, grad_scale: 8.0
2023-03-09 17:06:45,609 INFO [train.py:898] (3/4) Epoch 23, batch 3500, loss[loss=0.1451, simple_loss=0.2311, pruned_loss=0.02953, over 18373.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2506, pruned_loss=0.03537, over 3587867.34 frames. ], batch size: 46, lr: 4.85e-03, grad_scale: 8.0
], batch size: 46, lr: 4.85e-03, grad_scale: 8.0 2023-03-09 17:07:01,339 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3914, 3.1577, 1.9379, 4.2351, 2.9012, 3.6522, 2.0555, 3.4726], device='cuda:3'), covar=tensor([0.0617, 0.0913, 0.1616, 0.0496, 0.0872, 0.0334, 0.1519, 0.0536], device='cuda:3'), in_proj_covar=tensor([0.0217, 0.0228, 0.0190, 0.0290, 0.0194, 0.0268, 0.0204, 0.0203], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:07:31,227 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.962e+02 2.487e+02 3.071e+02 3.696e+02 9.873e+02, threshold=6.142e+02, percent-clipped=4.0 2023-03-09 17:07:41,354 INFO [train.py:898] (3/4) Epoch 23, batch 3550, loss[loss=0.1594, simple_loss=0.2555, pruned_loss=0.03171, over 17807.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2514, pruned_loss=0.03579, over 3575047.48 frames. ], batch size: 70, lr: 4.85e-03, grad_scale: 8.0 2023-03-09 17:07:47,276 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8112, 4.4800, 4.5298, 3.3916, 3.7401, 3.4076, 2.9044, 2.4839], device='cuda:3'), covar=tensor([0.0236, 0.0152, 0.0082, 0.0320, 0.0350, 0.0247, 0.0638, 0.0867], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0062, 0.0065, 0.0070, 0.0091, 0.0068, 0.0078, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 17:08:33,900 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-09 17:08:34,482 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9247, 4.6076, 4.6308, 3.4889, 3.8235, 3.5294, 2.8178, 2.4890], device='cuda:3'), covar=tensor([0.0210, 0.0134, 0.0078, 0.0292, 0.0310, 0.0229, 0.0684, 0.0878], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0062, 0.0065, 0.0070, 0.0091, 0.0068, 0.0078, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 17:08:35,274 INFO [train.py:898] (3/4) Epoch 23, batch 3600, loss[loss=0.1582, simple_loss=0.2383, pruned_loss=0.03901, over 18346.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2513, pruned_loss=0.03564, over 3577694.76 frames. ], batch size: 46, lr: 4.84e-03, grad_scale: 8.0 2023-03-09 17:08:38,736 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=83552.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:08:54,791 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8468, 5.3488, 2.7858, 5.1752, 5.0674, 5.3403, 5.1928, 2.7241], device='cuda:3'), covar=tensor([0.0245, 0.0079, 0.0849, 0.0078, 0.0078, 0.0095, 0.0097, 0.0975], device='cuda:3'), in_proj_covar=tensor([0.0091, 0.0082, 0.0098, 0.0097, 0.0088, 0.0078, 0.0086, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 17:09:39,867 INFO [train.py:898] (3/4) Epoch 24, batch 0, loss[loss=0.1344, simple_loss=0.2143, pruned_loss=0.0273, over 18387.00 frames. ], tot_loss[loss=0.1344, simple_loss=0.2143, pruned_loss=0.0273, over 18387.00 frames. 
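The attn_weights_entropy dumps print one value per attention head (eight per module here); the numbers behave like the Shannon entropy of each head's attention distribution, small for peaky heads and large for diffuse ones. A sketch of such a diagnostic, not the actual zipformer.py:1455 code:

```python
# Per-head attention entropy: -sum(p * log p) over the key axis, averaged
# over query positions, yielding one scalar per head as in the log dumps.
import torch

def attn_weights_entropy(attn: torch.Tensor, eps: float = 1.0e-20) -> torch.Tensor:
    """attn: (num_heads, query_len, key_len); rows sum to 1 over key_len."""
    ent = -(attn * (attn + eps).log()).sum(dim=-1)   # (num_heads, query_len)
    return ent.mean(dim=-1)                          # (num_heads,)

attn = torch.softmax(torch.randn(8, 10, 10), dim=-1)
print(attn_weights_entropy(attn))  # eight per-head entropies
```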
], batch size: 42, lr: 4.74e-03, grad_scale: 8.0 2023-03-09 17:09:39,867 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 17:09:49,028 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8750, 2.2680, 3.0000, 2.7529, 2.2562, 3.0743, 3.0049, 2.1880], device='cuda:3'), covar=tensor([0.0615, 0.1506, 0.0598, 0.0549, 0.1648, 0.0447, 0.0831, 0.1050], device='cuda:3'), in_proj_covar=tensor([0.0212, 0.0238, 0.0219, 0.0166, 0.0226, 0.0212, 0.0251, 0.0195], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 17:09:51,563 INFO [train.py:932] (3/4) Epoch 24, validation: loss=0.1502, simple_loss=0.2499, pruned_loss=0.02529, over 944034.00 frames. 2023-03-09 17:09:51,564 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 17:10:00,560 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.679e+02 2.611e+02 3.172e+02 4.204e+02 1.377e+03, threshold=6.343e+02, percent-clipped=6.0 2023-03-09 17:10:34,269 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83619.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:10:47,432 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3934, 2.8167, 2.4307, 2.7315, 3.5455, 3.4382, 3.0240, 2.8043], device='cuda:3'), covar=tensor([0.0225, 0.0274, 0.0583, 0.0418, 0.0168, 0.0170, 0.0364, 0.0381], device='cuda:3'), in_proj_covar=tensor([0.0141, 0.0140, 0.0164, 0.0161, 0.0135, 0.0120, 0.0158, 0.0160], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:10:50,365 INFO [train.py:898] (3/4) Epoch 24, batch 50, loss[loss=0.164, simple_loss=0.2577, pruned_loss=0.03515, over 18370.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2549, pruned_loss=0.03636, over 802515.65 frames. ], batch size: 50, lr: 4.74e-03, grad_scale: 8.0 2023-03-09 17:10:58,627 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=83640.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:11:03,759 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9539, 5.4941, 5.4670, 5.4449, 4.9811, 5.3917, 4.8708, 5.3785], device='cuda:3'), covar=tensor([0.0231, 0.0241, 0.0159, 0.0381, 0.0372, 0.0193, 0.0951, 0.0250], device='cuda:3'), in_proj_covar=tensor([0.0220, 0.0263, 0.0258, 0.0337, 0.0276, 0.0273, 0.0307, 0.0264], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 17:11:17,975 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0092, 4.2758, 2.8628, 4.2029, 5.3168, 2.8221, 4.0624, 4.1594], device='cuda:3'), covar=tensor([0.0185, 0.1143, 0.1320, 0.0587, 0.0078, 0.1092, 0.0581, 0.0615], device='cuda:3'), in_proj_covar=tensor([0.0175, 0.0274, 0.0207, 0.0200, 0.0133, 0.0185, 0.0219, 0.0228], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:11:30,058 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=83667.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:11:48,508 INFO [train.py:898] (3/4) Epoch 24, batch 100, loss[loss=0.1465, simple_loss=0.2435, pruned_loss=0.02474, over 18394.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2534, pruned_loss=0.03649, over 1415870.30 frames. 
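At the epoch boundary (Epoch 24, batch 0) the trainer pauses to compute a validation loss over a fixed dev set (the same 944034.00 frames each time) and then reports the CUDA allocation high-water mark, 19934MB here. A minimal sketch of that step; the model and dataloader interfaces are placeholders, and only torch.cuda.max_memory_allocated is the real API:

```python
# Sketch of the "Computing validation loss" / "Maximum memory allocated"
# step. The model is assumed to return (summed_loss, num_frames) per batch.
import torch

@torch.no_grad()
def compute_validation_loss(model, valid_loader, device) -> float:
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    for batch in valid_loader:
        feats = batch["features"].to(device)
        loss, num_frames = model(feats)       # assumed interface
        tot_loss += loss.item()
        tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames

def log_peak_memory(device) -> None:
    mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"Maximum memory allocated so far is {mb}MB")
```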
], batch size: 50, lr: 4.74e-03, grad_scale: 8.0 2023-03-09 17:11:58,117 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.623e+02 2.468e+02 2.945e+02 3.630e+02 7.467e+02, threshold=5.891e+02, percent-clipped=1.0 2023-03-09 17:12:16,076 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5817, 2.3617, 2.5464, 2.5728, 3.1965, 4.8270, 4.8645, 3.4201], device='cuda:3'), covar=tensor([0.2046, 0.2537, 0.3255, 0.2063, 0.2494, 0.0274, 0.0346, 0.0995], device='cuda:3'), in_proj_covar=tensor([0.0311, 0.0352, 0.0391, 0.0283, 0.0392, 0.0252, 0.0298, 0.0262], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 17:12:20,150 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-09 17:12:29,949 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7591, 2.4340, 2.7455, 2.7501, 3.2629, 4.9684, 4.9758, 3.3485], device='cuda:3'), covar=tensor([0.1895, 0.2447, 0.2943, 0.1876, 0.2479, 0.0240, 0.0302, 0.1020], device='cuda:3'), in_proj_covar=tensor([0.0310, 0.0350, 0.0390, 0.0282, 0.0391, 0.0251, 0.0297, 0.0261], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 17:12:46,025 INFO [train.py:898] (3/4) Epoch 24, batch 150, loss[loss=0.1594, simple_loss=0.2578, pruned_loss=0.03048, over 16065.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2524, pruned_loss=0.03635, over 1888519.89 frames. ], batch size: 94, lr: 4.73e-03, grad_scale: 8.0 2023-03-09 17:13:18,084 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83760.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:13:31,043 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9587, 3.8344, 5.1682, 4.6122, 3.5069, 3.1726, 4.6682, 5.3508], device='cuda:3'), covar=tensor([0.0786, 0.1531, 0.0184, 0.0378, 0.0957, 0.1175, 0.0362, 0.0196], device='cuda:3'), in_proj_covar=tensor([0.0153, 0.0280, 0.0163, 0.0186, 0.0195, 0.0194, 0.0199, 0.0206], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:13:39,861 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4124, 3.3379, 3.1767, 2.7165, 3.0997, 2.4243, 2.6355, 3.2658], device='cuda:3'), covar=tensor([0.0090, 0.0130, 0.0127, 0.0200, 0.0143, 0.0277, 0.0268, 0.0108], device='cuda:3'), in_proj_covar=tensor([0.0144, 0.0166, 0.0138, 0.0191, 0.0146, 0.0182, 0.0186, 0.0125], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 17:13:43,899 INFO [train.py:898] (3/4) Epoch 24, batch 200, loss[loss=0.1522, simple_loss=0.2474, pruned_loss=0.02853, over 18378.00 frames. ], tot_loss[loss=0.1617, simple_loss=0.2516, pruned_loss=0.03593, over 2254817.86 frames. ], batch size: 50, lr: 4.73e-03, grad_scale: 4.0 2023-03-09 17:13:53,810 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.895e+02 2.575e+02 2.989e+02 3.608e+02 5.254e+02, threshold=5.979e+02, percent-clipped=0.0 2023-03-09 17:14:15,891 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.18 vs. 
limit=5.0 2023-03-09 17:14:29,045 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83821.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:14:42,273 INFO [train.py:898] (3/4) Epoch 24, batch 250, loss[loss=0.1671, simple_loss=0.2636, pruned_loss=0.03527, over 18466.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2517, pruned_loss=0.03572, over 2538835.98 frames. ], batch size: 59, lr: 4.73e-03, grad_scale: 4.0 2023-03-09 17:15:04,766 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83852.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:15:40,431 INFO [train.py:898] (3/4) Epoch 24, batch 300, loss[loss=0.174, simple_loss=0.2674, pruned_loss=0.04032, over 18371.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2517, pruned_loss=0.03549, over 2781755.17 frames. ], batch size: 55, lr: 4.73e-03, grad_scale: 4.0 2023-03-09 17:15:50,703 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.883e+02 2.677e+02 3.133e+02 3.610e+02 5.796e+02, threshold=6.266e+02, percent-clipped=0.0 2023-03-09 17:16:00,992 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=83900.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:16:38,564 INFO [train.py:898] (3/4) Epoch 24, batch 350, loss[loss=0.1618, simple_loss=0.2573, pruned_loss=0.03317, over 18277.00 frames. ], tot_loss[loss=0.1617, simple_loss=0.2521, pruned_loss=0.03564, over 2954024.46 frames. ], batch size: 57, lr: 4.73e-03, grad_scale: 2.0 2023-03-09 17:16:44,629 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8379, 5.2270, 2.7994, 5.0903, 4.9694, 5.2461, 5.0836, 2.6966], device='cuda:3'), covar=tensor([0.0233, 0.0062, 0.0788, 0.0079, 0.0072, 0.0072, 0.0086, 0.0971], device='cuda:3'), in_proj_covar=tensor([0.0091, 0.0082, 0.0098, 0.0096, 0.0088, 0.0078, 0.0086, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 17:16:46,907 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83940.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:16:56,906 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83949.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:17:36,996 INFO [train.py:898] (3/4) Epoch 24, batch 400, loss[loss=0.1671, simple_loss=0.2637, pruned_loss=0.03525, over 17887.00 frames. ], tot_loss[loss=0.1612, simple_loss=0.2518, pruned_loss=0.03532, over 3091123.49 frames. ], batch size: 65, lr: 4.73e-03, grad_scale: 4.0 2023-03-09 17:17:39,575 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83985.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:17:42,829 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=83988.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:17:48,087 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.909e+02 2.479e+02 2.899e+02 3.539e+02 5.465e+02, threshold=5.798e+02, percent-clipped=0.0 2023-03-09 17:18:13,532 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84010.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 17:18:39,727 INFO [train.py:898] (3/4) Epoch 24, batch 450, loss[loss=0.1417, simple_loss=0.2289, pruned_loss=0.02729, over 18506.00 frames. ], tot_loss[loss=0.1606, simple_loss=0.2507, pruned_loss=0.03528, over 3199469.80 frames. 
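The optim.py:369 records are internally consistent with a simple rule: the five numbers after "grad-norm quartiles" read as min/25%/50%/75%/max over a window of recent gradient norms, and the printed threshold equals Clipping_scale times the median (2.0 x 3.133e+02 = 6.266e+02 in the record above, and the same relation holds throughout this section), with percent-clipped the share of batches whose norm exceeded the threshold. A sketch of that bookkeeping; the window size and the running-percentage details are assumptions:

```python
# Reconstruction of the optim.py:369 report: quartiles of recent gradient
# norms, threshold = clipping_scale * median, and the clipped percentage.
import torch
from collections import deque

class GradNormClipper:
    def __init__(self, clipping_scale: float = 2.0, window: int = 1024):
        self.clipping_scale = clipping_scale
        self.norms = deque(maxlen=window)   # recent grad norms (window assumed)
        self.clipped = 0
        self.seen = 0

    def step(self, grad_norm: float) -> float:
        self.norms.append(grad_norm)
        t = torch.tensor(list(self.norms))
        q = torch.quantile(t, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = self.clipping_scale * q[2].item()   # scale * median
        self.seen += 1
        self.clipped += int(grad_norm > threshold)
        print(f"Clipping_scale={self.clipping_scale}, grad-norm quartiles "
              + " ".join(f"{v:.3e}" for v in q.tolist())
              + f", threshold={threshold:.3e},"
                f" percent-clipped={100.0 * self.clipped / self.seen:.1f}")
        return min(1.0, threshold / grad_norm)  # factor to scale gradients by
```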
], batch size: 47, lr: 4.73e-03, grad_scale: 4.0 2023-03-09 17:18:55,023 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84046.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:18:57,542 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 17:19:01,682 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9058, 3.9550, 3.7607, 3.4428, 3.5867, 3.0507, 3.1659, 3.8651], device='cuda:3'), covar=tensor([0.0063, 0.0103, 0.0091, 0.0125, 0.0102, 0.0180, 0.0184, 0.0077], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0167, 0.0139, 0.0192, 0.0148, 0.0182, 0.0188, 0.0126], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 17:19:38,450 INFO [train.py:898] (3/4) Epoch 24, batch 500, loss[loss=0.1276, simple_loss=0.2109, pruned_loss=0.02216, over 17681.00 frames. ], tot_loss[loss=0.16, simple_loss=0.2497, pruned_loss=0.03513, over 3281121.22 frames. ], batch size: 39, lr: 4.73e-03, grad_scale: 4.0 2023-03-09 17:19:40,984 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1610, 5.2798, 5.3745, 5.3486, 5.0845, 5.8893, 5.5016, 5.2093], device='cuda:3'), covar=tensor([0.1178, 0.0715, 0.0755, 0.0851, 0.1510, 0.0775, 0.0696, 0.1727], device='cuda:3'), in_proj_covar=tensor([0.0364, 0.0293, 0.0315, 0.0321, 0.0331, 0.0430, 0.0290, 0.0423], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 17:19:49,797 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.687e+02 2.790e+02 3.387e+02 4.220e+02 9.031e+02, threshold=6.774e+02, percent-clipped=2.0 2023-03-09 17:19:54,650 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84097.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:20:16,875 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84116.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:20:36,462 INFO [train.py:898] (3/4) Epoch 24, batch 550, loss[loss=0.1536, simple_loss=0.2417, pruned_loss=0.03272, over 18379.00 frames. ], tot_loss[loss=0.1598, simple_loss=0.2497, pruned_loss=0.03493, over 3353883.19 frames. 
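The scaling.py:679 lines come from a whitening constraint: the channels are split into num_groups groups, each group's feature covariance is estimated, and a scalar metric measures how far it is from a multiple of the identity; the metric is 1.0 for perfectly white features and grows as the covariance becomes lopsided, and the message fires when it crosses the limit (2.0 for the 96-channel checks here, 5.0 for the num_groups=1, 384-channel checks elsewhere in this section). The formula below is an assumed reconstruction with that behaviour, not the file's code:

```python
# Assumed whitening metric: trace(C @ C) * d / trace(C)^2 per group, which
# equals 1.0 when C is a multiple of the identity and grows otherwise.
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """x: (num_frames, num_channels), num_channels divisible by num_groups."""
    n, c = x.shape
    d = c // num_groups
    xg = x.reshape(n, num_groups, d).transpose(0, 1)   # (groups, frames, d)
    cov = xg.transpose(1, 2) @ xg / n                  # (groups, d, d)
    trace_c = cov.diagonal(dim1=-2, dim2=-1).sum(-1)   # trace(C)
    trace_c2 = (cov * cov).sum(dim=(-2, -1))           # trace(C @ C), C symmetric
    return (trace_c2 * d / trace_c.pow(2)).mean().item()

x = torch.randn(1000, 96)   # near-white features give a metric close to 1.0
print(f"Whitening: num_groups=8, num_channels=96, "
      f"metric={whitening_metric(x, 8):.2f} vs. limit=2.0")
```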
], batch size: 46, lr: 4.72e-03, grad_scale: 4.0 2023-03-09 17:21:05,087 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84158.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:21:12,617 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6456, 3.0759, 4.4512, 3.6970, 2.8059, 4.5969, 3.8720, 2.8741], device='cuda:3'), covar=tensor([0.0544, 0.1382, 0.0251, 0.0465, 0.1564, 0.0249, 0.0571, 0.0993], device='cuda:3'), in_proj_covar=tensor([0.0213, 0.0239, 0.0218, 0.0166, 0.0225, 0.0213, 0.0251, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 17:21:15,984 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8976, 5.4239, 2.8581, 5.2636, 5.1300, 5.4157, 5.3028, 2.7237], device='cuda:3'), covar=tensor([0.0219, 0.0057, 0.0733, 0.0070, 0.0070, 0.0063, 0.0078, 0.0945], device='cuda:3'), in_proj_covar=tensor([0.0091, 0.0082, 0.0097, 0.0096, 0.0088, 0.0077, 0.0087, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 17:21:33,938 INFO [train.py:898] (3/4) Epoch 24, batch 600, loss[loss=0.1583, simple_loss=0.2508, pruned_loss=0.03285, over 18610.00 frames. ], tot_loss[loss=0.1599, simple_loss=0.2498, pruned_loss=0.03495, over 3404241.84 frames. ], batch size: 52, lr: 4.72e-03, grad_scale: 4.0 2023-03-09 17:21:45,739 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.833e+02 2.553e+02 3.031e+02 3.741e+02 6.860e+02, threshold=6.062e+02, percent-clipped=1.0 2023-03-09 17:21:53,169 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84199.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:22:32,600 INFO [train.py:898] (3/4) Epoch 24, batch 650, loss[loss=0.1607, simple_loss=0.255, pruned_loss=0.03322, over 18468.00 frames. ], tot_loss[loss=0.1601, simple_loss=0.2501, pruned_loss=0.03506, over 3445985.46 frames. ], batch size: 51, lr: 4.72e-03, grad_scale: 4.0 2023-03-09 17:22:36,510 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84236.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:23:04,114 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84260.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:23:12,787 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5004, 5.5208, 5.7429, 5.7823, 5.4377, 6.2859, 5.9253, 5.4634], device='cuda:3'), covar=tensor([0.1132, 0.0578, 0.0644, 0.0721, 0.1380, 0.0680, 0.0628, 0.1718], device='cuda:3'), in_proj_covar=tensor([0.0370, 0.0297, 0.0319, 0.0324, 0.0335, 0.0434, 0.0291, 0.0425], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 17:23:30,930 INFO [train.py:898] (3/4) Epoch 24, batch 700, loss[loss=0.1618, simple_loss=0.2501, pruned_loss=0.03678, over 16224.00 frames. ], tot_loss[loss=0.1596, simple_loss=0.2495, pruned_loss=0.03484, over 3474225.35 frames. 
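The learning rate decays smoothly with the global batch count and steps down once per epoch: 4.86e-03 through epoch 23, 4.74e-03 at the start of epoch 24, 4.70e-03 by batch_count 85k. An Eden-style schedule reproduces all three printed values to the displayed precision under the constants assumed below (base_lr=0.05, lr_batches=5000, lr_epochs=3.5, with the epoch argument counting finished epochs); treat this as a consistent reconstruction rather than the recipe's verbatim code:

```python
# Eden-style LR schedule; constants are assumptions that happen to
# reproduce the lr values printed in this section.
def eden_lr(base_lr: float, batch: float, epoch: float,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# batch_count values taken from the log; epoch = finished epochs
for epoch, batch in [(22, 83002), (23, 83619), (23, 85012)]:
    print(f"epoch {epoch + 1}, batch_count {batch}: "
          f"lr {eden_lr(0.05, batch, epoch):.2e}")
# -> 4.86e-03, 4.74e-03, 4.70e-03, matching the records above
```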
], batch size: 94, lr: 4.72e-03, grad_scale: 4.0 2023-03-09 17:23:42,558 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.851e+02 2.577e+02 2.948e+02 3.805e+02 8.453e+02, threshold=5.897e+02, percent-clipped=4.0 2023-03-09 17:23:47,518 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84297.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:23:54,346 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9024, 3.2702, 4.6315, 3.8943, 3.0245, 4.8686, 4.1186, 3.3412], device='cuda:3'), covar=tensor([0.0486, 0.1351, 0.0254, 0.0492, 0.1471, 0.0230, 0.0561, 0.0868], device='cuda:3'), in_proj_covar=tensor([0.0215, 0.0242, 0.0220, 0.0168, 0.0227, 0.0216, 0.0254, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 17:23:56,356 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84305.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 17:24:06,608 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7176, 2.9158, 4.4023, 3.7035, 2.9893, 4.6527, 3.9327, 2.9969], device='cuda:3'), covar=tensor([0.0516, 0.1582, 0.0290, 0.0483, 0.1348, 0.0225, 0.0567, 0.0926], device='cuda:3'), in_proj_covar=tensor([0.0215, 0.0243, 0.0221, 0.0168, 0.0227, 0.0216, 0.0255, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 17:24:29,565 INFO [train.py:898] (3/4) Epoch 24, batch 750, loss[loss=0.1589, simple_loss=0.2455, pruned_loss=0.03615, over 18422.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2492, pruned_loss=0.03477, over 3504649.08 frames. ], batch size: 48, lr: 4.72e-03, grad_scale: 4.0 2023-03-09 17:24:33,254 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84336.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:24:38,588 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84341.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:25:27,052 INFO [train.py:898] (3/4) Epoch 24, batch 800, loss[loss=0.1755, simple_loss=0.2675, pruned_loss=0.04178, over 18353.00 frames. ], tot_loss[loss=0.16, simple_loss=0.2499, pruned_loss=0.03505, over 3523349.95 frames. ], batch size: 56, lr: 4.72e-03, grad_scale: 8.0 2023-03-09 17:25:39,058 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.586e+02 2.623e+02 3.073e+02 3.761e+02 8.070e+02, threshold=6.147e+02, percent-clipped=7.0 2023-03-09 17:25:43,875 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84397.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:26:05,757 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84416.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:26:25,029 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7126, 3.5413, 2.5685, 4.5176, 3.2063, 4.3613, 2.6975, 4.1175], device='cuda:3'), covar=tensor([0.0632, 0.0809, 0.1320, 0.0505, 0.0806, 0.0294, 0.1150, 0.0413], device='cuda:3'), in_proj_covar=tensor([0.0220, 0.0230, 0.0193, 0.0292, 0.0195, 0.0271, 0.0206, 0.0206], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:26:25,776 INFO [train.py:898] (3/4) Epoch 24, batch 850, loss[loss=0.1474, simple_loss=0.2319, pruned_loss=0.03139, over 18490.00 frames. 
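grad_scale flips between 4.0 and 8.0 across these records (4.0 at batch 750, 8.0 again by batch 800), the signature of dynamic fp16 loss scaling: the scale is halved when a batch produces inf/nan gradients and doubled back after a run of clean steps. A minimal sketch of that policy; the constants are assumptions, and in practice torch.cuda.amp.GradScaler implements the production version:

```python
# Toy dynamic loss-scale policy explaining the 8.0 <-> 4.0 grad_scale
# transitions in these records; backoff/growth constants are assumptions.
class DynamicLossScale:
    def __init__(self, scale: float = 8.0, growth_interval: int = 2000):
        self.scale = scale
        self.growth_interval = growth_interval
        self.good_steps = 0

    def update(self, found_inf: bool) -> None:
        if found_inf:
            self.scale *= 0.5       # overflow: back off, e.g. 8.0 -> 4.0
            self.good_steps = 0
        else:
            self.good_steps += 1
            if self.good_steps == self.growth_interval:
                self.scale *= 2.0   # sustained clean steps: 4.0 -> 8.0
                self.good_steps = 0
```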
], tot_loss[loss=0.1599, simple_loss=0.2496, pruned_loss=0.03509, over 3539234.01 frames. ], batch size: 44, lr: 4.72e-03, grad_scale: 8.0 2023-03-09 17:26:36,816 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-09 17:26:49,165 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84453.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:27:01,411 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=84464.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:27:23,407 INFO [train.py:898] (3/4) Epoch 24, batch 900, loss[loss=0.1443, simple_loss=0.2236, pruned_loss=0.03246, over 17706.00 frames. ], tot_loss[loss=0.1604, simple_loss=0.2502, pruned_loss=0.0353, over 3556846.83 frames. ], batch size: 39, lr: 4.71e-03, grad_scale: 8.0 2023-03-09 17:27:35,099 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.772e+02 2.570e+02 3.063e+02 4.034e+02 9.733e+02, threshold=6.126e+02, percent-clipped=4.0 2023-03-09 17:28:11,998 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84525.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:28:21,169 INFO [train.py:898] (3/4) Epoch 24, batch 950, loss[loss=0.1626, simple_loss=0.2507, pruned_loss=0.03729, over 18277.00 frames. ], tot_loss[loss=0.1602, simple_loss=0.2499, pruned_loss=0.03519, over 3575847.52 frames. ], batch size: 49, lr: 4.71e-03, grad_scale: 8.0 2023-03-09 17:28:47,307 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84555.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:29:19,841 INFO [train.py:898] (3/4) Epoch 24, batch 1000, loss[loss=0.1662, simple_loss=0.2561, pruned_loss=0.03819, over 18394.00 frames. ], tot_loss[loss=0.1601, simple_loss=0.2498, pruned_loss=0.03521, over 3582577.36 frames. ], batch size: 52, lr: 4.71e-03, grad_scale: 8.0 2023-03-09 17:29:24,258 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84586.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:29:30,785 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84592.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:29:31,657 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.810e+02 2.603e+02 3.017e+02 3.476e+02 5.470e+02, threshold=6.035e+02, percent-clipped=0.0 2023-03-09 17:29:46,407 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84605.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:29:58,494 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0 2023-03-09 17:30:11,665 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8793, 3.9143, 3.7415, 3.3447, 3.5775, 2.9970, 3.0306, 3.9234], device='cuda:3'), covar=tensor([0.0070, 0.0095, 0.0083, 0.0137, 0.0111, 0.0190, 0.0213, 0.0070], device='cuda:3'), in_proj_covar=tensor([0.0147, 0.0168, 0.0140, 0.0192, 0.0149, 0.0183, 0.0188, 0.0126], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 17:30:17,819 INFO [train.py:898] (3/4) Epoch 24, batch 1050, loss[loss=0.1777, simple_loss=0.2693, pruned_loss=0.04306, over 18337.00 frames. ], tot_loss[loss=0.1597, simple_loss=0.2492, pruned_loss=0.03507, over 3583644.63 frames. 
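Every record carries the (3/4) tag: this stream comes from rank 3 of a 4-process DDP job, and each rank writes its own log with the same layout of timestamp, level, [file:lineno] and rank tag. A sketch of a logging setup that yields this format; the helper name is illustrative:

```python
# Rank-tagged logger reproducing the "(rank/world_size)" record layout.
import logging

def setup_rank_logger(rank: int, world_size: int) -> logging.Logger:
    fmt = ("%(asctime)s %(levelname)s [%(filename)s:%(lineno)d] "
           f"({rank}/{world_size}) %(message)s")
    logging.basicConfig(level=logging.INFO, format=fmt)
    return logging.getLogger()

log = setup_rank_logger(3, 4)
log.info("Epoch 24, batch 900, ...")  # renders like the records above
```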
], batch size: 56, lr: 4.71e-03, grad_scale: 8.0 2023-03-09 17:30:18,174 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84633.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:30:27,835 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84641.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:30:33,602 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9008, 3.7048, 5.1403, 4.5811, 3.5178, 3.2655, 4.5227, 5.2976], device='cuda:3'), covar=tensor([0.0828, 0.1705, 0.0238, 0.0406, 0.0922, 0.1178, 0.0386, 0.0232], device='cuda:3'), in_proj_covar=tensor([0.0152, 0.0278, 0.0163, 0.0184, 0.0194, 0.0194, 0.0198, 0.0204], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:30:41,844 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=84653.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:31:16,086 INFO [train.py:898] (3/4) Epoch 24, batch 1100, loss[loss=0.1661, simple_loss=0.2547, pruned_loss=0.03875, over 18250.00 frames. ], tot_loss[loss=0.1599, simple_loss=0.2496, pruned_loss=0.03509, over 3572896.30 frames. ], batch size: 60, lr: 4.71e-03, grad_scale: 8.0 2023-03-09 17:31:23,089 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=84689.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:31:27,983 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84692.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:31:28,855 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.674e+02 2.619e+02 3.034e+02 3.649e+02 7.124e+02, threshold=6.067e+02, percent-clipped=2.0 2023-03-09 17:31:30,479 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84694.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:31:49,116 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2371, 3.9248, 5.3626, 3.2738, 4.6553, 2.8525, 3.3509, 2.0271], device='cuda:3'), covar=tensor([0.0983, 0.0859, 0.0114, 0.0894, 0.0482, 0.2419, 0.2482, 0.2098], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0245, 0.0205, 0.0201, 0.0259, 0.0273, 0.0328, 0.0239], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 17:32:14,531 INFO [train.py:898] (3/4) Epoch 24, batch 1150, loss[loss=0.1555, simple_loss=0.2469, pruned_loss=0.03203, over 18396.00 frames. ], tot_loss[loss=0.1602, simple_loss=0.2499, pruned_loss=0.0352, over 3584049.94 frames. ], batch size: 52, lr: 4.71e-03, grad_scale: 8.0 2023-03-09 17:32:36,223 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9820, 3.3358, 4.6451, 3.9884, 3.1548, 4.8014, 4.0990, 3.3004], device='cuda:3'), covar=tensor([0.0374, 0.1232, 0.0232, 0.0402, 0.1253, 0.0247, 0.0501, 0.0840], device='cuda:3'), in_proj_covar=tensor([0.0214, 0.0242, 0.0220, 0.0169, 0.0227, 0.0216, 0.0254, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 17:32:38,334 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84753.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:33:06,582 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. 
limit=2.0 2023-03-09 17:33:12,319 INFO [train.py:898] (3/4) Epoch 24, batch 1200, loss[loss=0.1537, simple_loss=0.2405, pruned_loss=0.03343, over 18508.00 frames. ], tot_loss[loss=0.1604, simple_loss=0.2501, pruned_loss=0.03529, over 3583002.47 frames. ], batch size: 47, lr: 4.71e-03, grad_scale: 8.0 2023-03-09 17:33:24,631 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.879e+02 2.822e+02 3.199e+02 4.147e+02 1.139e+03, threshold=6.397e+02, percent-clipped=6.0 2023-03-09 17:33:32,840 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84800.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:33:33,883 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=84801.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:34:10,446 INFO [train.py:898] (3/4) Epoch 24, batch 1250, loss[loss=0.1614, simple_loss=0.2614, pruned_loss=0.03068, over 18555.00 frames. ], tot_loss[loss=0.1606, simple_loss=0.2503, pruned_loss=0.03547, over 3586881.46 frames. ], batch size: 49, lr: 4.70e-03, grad_scale: 4.0 2023-03-09 17:34:18,881 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8604, 4.5283, 4.3783, 4.4904, 4.0786, 4.3832, 4.7312, 4.5649], device='cuda:3'), covar=tensor([0.2877, 0.1529, 0.2658, 0.1405, 0.2930, 0.1481, 0.1290, 0.1550], device='cuda:3'), in_proj_covar=tensor([0.0625, 0.0551, 0.0389, 0.0570, 0.0770, 0.0561, 0.0784, 0.0596], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 17:34:27,878 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0030, 4.5592, 4.2193, 4.3117, 4.0165, 4.7408, 4.4712, 4.1604], device='cuda:3'), covar=tensor([0.1532, 0.1317, 0.1153, 0.1022, 0.1605, 0.1245, 0.0801, 0.1847], device='cuda:3'), in_proj_covar=tensor([0.0367, 0.0296, 0.0317, 0.0322, 0.0333, 0.0433, 0.0291, 0.0425], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 17:34:32,772 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9571, 3.7953, 5.1565, 2.9099, 4.4987, 2.6851, 3.1334, 1.8896], device='cuda:3'), covar=tensor([0.1182, 0.0922, 0.0231, 0.1048, 0.0526, 0.2800, 0.2866, 0.2269], device='cuda:3'), in_proj_covar=tensor([0.0227, 0.0250, 0.0210, 0.0205, 0.0265, 0.0278, 0.0335, 0.0244], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 17:34:36,639 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84855.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:34:43,635 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84861.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:35:06,905 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84881.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:35:08,995 INFO [train.py:898] (3/4) Epoch 24, batch 1300, loss[loss=0.1416, simple_loss=0.229, pruned_loss=0.02709, over 18280.00 frames. ], tot_loss[loss=0.1608, simple_loss=0.2506, pruned_loss=0.03554, over 3575164.21 frames. 
], batch size: 47, lr: 4.70e-03, grad_scale: 4.0 2023-03-09 17:35:19,285 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84892.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:35:21,285 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.784e+02 2.717e+02 3.045e+02 3.603e+02 7.814e+02, threshold=6.090e+02, percent-clipped=3.0 2023-03-09 17:35:22,829 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8298, 5.0310, 2.6368, 4.9037, 4.7786, 5.0481, 4.8526, 2.5337], device='cuda:3'), covar=tensor([0.0227, 0.0076, 0.0776, 0.0087, 0.0093, 0.0092, 0.0100, 0.1038], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0081, 0.0096, 0.0096, 0.0087, 0.0076, 0.0085, 0.0096], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 17:35:31,921 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=84903.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:36:07,823 INFO [train.py:898] (3/4) Epoch 24, batch 1350, loss[loss=0.1668, simple_loss=0.2618, pruned_loss=0.0359, over 18223.00 frames. ], tot_loss[loss=0.1603, simple_loss=0.2501, pruned_loss=0.03525, over 3572957.74 frames. ], batch size: 60, lr: 4.70e-03, grad_scale: 4.0 2023-03-09 17:36:16,022 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=84940.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:36:25,404 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4460, 3.8436, 2.3891, 3.7211, 4.7163, 2.3946, 3.3895, 3.6915], device='cuda:3'), covar=tensor([0.0269, 0.1228, 0.1712, 0.0719, 0.0120, 0.1372, 0.0819, 0.0781], device='cuda:3'), in_proj_covar=tensor([0.0175, 0.0274, 0.0208, 0.0202, 0.0134, 0.0185, 0.0219, 0.0228], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:36:28,854 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84951.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:36:43,754 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7964, 5.2538, 5.2790, 5.2990, 4.7517, 5.1703, 4.5448, 5.1244], device='cuda:3'), covar=tensor([0.0277, 0.0313, 0.0209, 0.0382, 0.0435, 0.0240, 0.1220, 0.0368], device='cuda:3'), in_proj_covar=tensor([0.0221, 0.0269, 0.0261, 0.0340, 0.0279, 0.0276, 0.0312, 0.0268], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 17:37:05,447 INFO [train.py:898] (3/4) Epoch 24, batch 1400, loss[loss=0.1655, simple_loss=0.2569, pruned_loss=0.03703, over 17906.00 frames. ], tot_loss[loss=0.1603, simple_loss=0.2502, pruned_loss=0.03515, over 3574432.91 frames. 
], batch size: 70, lr: 4.70e-03, grad_scale: 4.0 2023-03-09 17:37:13,042 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84989.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:37:15,556 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0025, 3.8130, 4.9436, 4.6167, 3.4114, 3.2682, 4.7414, 5.2830], device='cuda:3'), covar=tensor([0.0774, 0.1527, 0.0296, 0.0375, 0.0934, 0.1117, 0.0325, 0.0360], device='cuda:3'), in_proj_covar=tensor([0.0153, 0.0280, 0.0164, 0.0185, 0.0195, 0.0194, 0.0199, 0.0205], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:37:16,488 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84992.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:37:18,346 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.714e+02 2.377e+02 2.923e+02 3.851e+02 7.184e+02, threshold=5.846e+02, percent-clipped=2.0 2023-03-09 17:37:39,039 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85012.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:37:41,802 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5833, 5.4991, 5.1605, 5.5204, 5.4595, 4.8714, 5.3940, 5.1095], device='cuda:3'), covar=tensor([0.0406, 0.0499, 0.1347, 0.0668, 0.0648, 0.0413, 0.0436, 0.1065], device='cuda:3'), in_proj_covar=tensor([0.0510, 0.0575, 0.0722, 0.0449, 0.0473, 0.0525, 0.0559, 0.0699], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 17:38:03,950 INFO [train.py:898] (3/4) Epoch 24, batch 1450, loss[loss=0.1301, simple_loss=0.2127, pruned_loss=0.02373, over 18456.00 frames. ], tot_loss[loss=0.1604, simple_loss=0.2503, pruned_loss=0.03524, over 3580210.03 frames. ], batch size: 43, lr: 4.70e-03, grad_scale: 4.0 2023-03-09 17:38:12,402 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=85040.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:38:38,553 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7508, 3.6255, 4.9647, 4.4374, 3.3781, 3.2126, 4.4482, 5.1597], device='cuda:3'), covar=tensor([0.0831, 0.1611, 0.0193, 0.0400, 0.0937, 0.1101, 0.0391, 0.0251], device='cuda:3'), in_proj_covar=tensor([0.0152, 0.0279, 0.0164, 0.0185, 0.0195, 0.0194, 0.0198, 0.0205], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:38:58,479 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7253, 3.6025, 2.5973, 4.4711, 3.2225, 4.3293, 2.5678, 4.1729], device='cuda:3'), covar=tensor([0.0622, 0.0837, 0.1341, 0.0537, 0.0822, 0.0315, 0.1254, 0.0402], device='cuda:3'), in_proj_covar=tensor([0.0220, 0.0230, 0.0193, 0.0292, 0.0196, 0.0274, 0.0206, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:39:01,460 INFO [train.py:898] (3/4) Epoch 24, batch 1500, loss[loss=0.1355, simple_loss=0.2207, pruned_loss=0.02519, over 18468.00 frames. ], tot_loss[loss=0.1605, simple_loss=0.2504, pruned_loss=0.03527, over 3586965.69 frames. 
], batch size: 44, lr: 4.70e-03, grad_scale: 4.0 2023-03-09 17:39:06,215 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8898, 5.0300, 2.2681, 4.9307, 4.8025, 5.0522, 4.8397, 2.2778], device='cuda:3'), covar=tensor([0.0265, 0.0173, 0.1167, 0.0122, 0.0128, 0.0173, 0.0179, 0.1685], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0082, 0.0096, 0.0096, 0.0087, 0.0077, 0.0086, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 17:39:14,114 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.797e+02 2.574e+02 3.009e+02 3.655e+02 5.814e+02, threshold=6.018e+02, percent-clipped=0.0 2023-03-09 17:39:28,971 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85107.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:39:31,633 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 17:39:37,875 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85115.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:39:59,274 INFO [train.py:898] (3/4) Epoch 24, batch 1550, loss[loss=0.1701, simple_loss=0.2661, pruned_loss=0.03707, over 17733.00 frames. ], tot_loss[loss=0.161, simple_loss=0.2513, pruned_loss=0.03538, over 3590839.03 frames. ], batch size: 70, lr: 4.70e-03, grad_scale: 4.0 2023-03-09 17:40:25,884 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85156.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:40:31,866 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6060, 3.8813, 2.5732, 3.7722, 4.8260, 2.6458, 3.5686, 3.6206], device='cuda:3'), covar=tensor([0.0194, 0.1297, 0.1401, 0.0656, 0.0117, 0.1115, 0.0654, 0.0803], device='cuda:3'), in_proj_covar=tensor([0.0175, 0.0272, 0.0206, 0.0201, 0.0133, 0.0185, 0.0218, 0.0227], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:40:39,452 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85168.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 17:40:48,509 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85176.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:40:55,234 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85181.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:40:57,800 INFO [train.py:898] (3/4) Epoch 24, batch 1600, loss[loss=0.1741, simple_loss=0.2683, pruned_loss=0.03998, over 17934.00 frames. ], tot_loss[loss=0.161, simple_loss=0.251, pruned_loss=0.03549, over 3585069.47 frames. ], batch size: 70, lr: 4.69e-03, grad_scale: 8.0 2023-03-09 17:41:10,171 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-09 17:41:10,349 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.996e+02 2.511e+02 2.791e+02 3.550e+02 5.356e+02, threshold=5.582e+02, percent-clipped=0.0 2023-03-09 17:41:51,990 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=85229.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:41:57,476 INFO [train.py:898] (3/4) Epoch 24, batch 1650, loss[loss=0.1877, simple_loss=0.2723, pruned_loss=0.05159, over 12225.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2507, pruned_loss=0.03533, over 3580432.19 frames. 
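For post-hoc analysis (loss curves, lr checks) it helps to parse these train.py:898 records back into numbers. The regex below is written against the exact field layout of this log, and the sample line is copied from the Epoch 24, batch 1500 record above:

```python
# Parser for the train.py:898 record format as it appears in this log.
import re

RECORD = re.compile(
    r"Epoch (?P<epoch>\d+), batch (?P<batch>\d+), "
    r"loss\[loss=(?P<loss>[\d.]+), simple_loss=(?P<simple>[\d.]+), "
    r"pruned_loss=(?P<pruned>[\d.]+), over (?P<frames>[\d.]+) frames\. \], "
    r"tot_loss\[loss=(?P<tot>[\d.]+), .*?\], "
    r"batch size: (?P<bs>\d+), lr: (?P<lr>[\d.e+-]+)"
)

line = ("Epoch 24, batch 1500, loss[loss=0.1355, simple_loss=0.2207, "
        "pruned_loss=0.02519, over 18468.00 frames. ], tot_loss[loss=0.1605, "
        "simple_loss=0.2504, pruned_loss=0.03527, over 3586965.69 frames. ], "
        "batch size: 44, lr: 4.70e-03, grad_scale: 4.0")
m = RECORD.search(line)
print(m.group("epoch"), m.group("batch"), m.group("tot"), m.group("lr"))
# -> 24 1500 0.1605 4.70e-03
```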
], batch size: 129, lr: 4.69e-03, grad_scale: 8.0 2023-03-09 17:42:02,412 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5792, 2.3289, 2.5511, 2.5947, 3.2665, 4.6783, 4.5601, 3.3486], device='cuda:3'), covar=tensor([0.1933, 0.2494, 0.2945, 0.1925, 0.2299, 0.0266, 0.0400, 0.0966], device='cuda:3'), in_proj_covar=tensor([0.0313, 0.0354, 0.0393, 0.0284, 0.0394, 0.0252, 0.0300, 0.0264], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3') 2023-03-09 17:42:22,177 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7604, 4.5352, 4.4720, 3.2222, 3.7291, 3.4117, 2.6382, 2.5833], device='cuda:3'), covar=tensor([0.0249, 0.0158, 0.0091, 0.0339, 0.0351, 0.0241, 0.0739, 0.0806], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0061, 0.0065, 0.0069, 0.0091, 0.0068, 0.0077, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 17:42:55,803 INFO [train.py:898] (3/4) Epoch 24, batch 1700, loss[loss=0.1477, simple_loss=0.2382, pruned_loss=0.02861, over 18524.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.2512, pruned_loss=0.0355, over 3575388.25 frames. ], batch size: 49, lr: 4.69e-03, grad_scale: 8.0 2023-03-09 17:43:03,271 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85289.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:43:08,660 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.937e+02 2.815e+02 3.263e+02 3.900e+02 7.200e+02, threshold=6.526e+02, percent-clipped=2.0 2023-03-09 17:43:23,862 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85307.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:43:41,512 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6874, 2.3111, 2.5767, 2.5615, 3.1172, 4.6847, 4.5954, 3.1665], device='cuda:3'), covar=tensor([0.1914, 0.2465, 0.2916, 0.1960, 0.2504, 0.0264, 0.0405, 0.1060], device='cuda:3'), in_proj_covar=tensor([0.0314, 0.0356, 0.0395, 0.0285, 0.0396, 0.0254, 0.0301, 0.0265], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3') 2023-03-09 17:43:49,063 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8099, 5.2023, 5.2160, 5.2800, 4.6456, 5.1243, 4.5076, 5.1186], device='cuda:3'), covar=tensor([0.0254, 0.0364, 0.0241, 0.0373, 0.0514, 0.0253, 0.1336, 0.0327], device='cuda:3'), in_proj_covar=tensor([0.0221, 0.0268, 0.0262, 0.0342, 0.0279, 0.0275, 0.0312, 0.0267], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0006, 0.0006, 0.0007, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 17:43:52,179 INFO [train.py:898] (3/4) Epoch 24, batch 1750, loss[loss=0.1398, simple_loss=0.2296, pruned_loss=0.02501, over 18501.00 frames. ], tot_loss[loss=0.1608, simple_loss=0.2509, pruned_loss=0.03534, over 3575432.52 frames. ], batch size: 47, lr: 4.69e-03, grad_scale: 8.0 2023-03-09 17:43:57,786 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=85337.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:44:22,497 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. 
limit=2.0 2023-03-09 17:44:48,394 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8101, 4.0608, 2.4766, 4.0503, 5.1047, 2.7760, 3.6300, 3.6825], device='cuda:3'), covar=tensor([0.0167, 0.1402, 0.1416, 0.0567, 0.0088, 0.0984, 0.0669, 0.0835], device='cuda:3'), in_proj_covar=tensor([0.0175, 0.0274, 0.0206, 0.0201, 0.0134, 0.0185, 0.0219, 0.0227], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:44:49,076 INFO [train.py:898] (3/4) Epoch 24, batch 1800, loss[loss=0.1811, simple_loss=0.2667, pruned_loss=0.04771, over 18131.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2505, pruned_loss=0.03542, over 3566704.78 frames. ], batch size: 62, lr: 4.69e-03, grad_scale: 8.0 2023-03-09 17:45:01,673 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2396, 5.7494, 5.4077, 5.5210, 5.3466, 5.1981, 5.8061, 5.7503], device='cuda:3'), covar=tensor([0.1174, 0.0715, 0.0552, 0.0765, 0.1422, 0.0677, 0.0611, 0.0728], device='cuda:3'), in_proj_covar=tensor([0.0624, 0.0547, 0.0391, 0.0573, 0.0768, 0.0560, 0.0782, 0.0594], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 17:45:02,452 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.840e+02 2.661e+02 3.005e+02 3.822e+02 6.057e+02, threshold=6.010e+02, percent-clipped=0.0 2023-03-09 17:45:11,037 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8479, 3.4419, 2.6793, 3.3724, 3.9702, 2.6166, 3.3065, 3.3802], device='cuda:3'), covar=tensor([0.0294, 0.0972, 0.1316, 0.0671, 0.0182, 0.1059, 0.0695, 0.0743], device='cuda:3'), in_proj_covar=tensor([0.0175, 0.0275, 0.0206, 0.0202, 0.0134, 0.0186, 0.0220, 0.0228], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:45:27,469 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1505, 5.3457, 2.9557, 5.1994, 5.0629, 5.3350, 5.1666, 2.4872], device='cuda:3'), covar=tensor([0.0200, 0.0117, 0.0819, 0.0116, 0.0105, 0.0139, 0.0133, 0.1427], device='cuda:3'), in_proj_covar=tensor([0.0088, 0.0081, 0.0096, 0.0095, 0.0086, 0.0076, 0.0085, 0.0095], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 17:45:42,379 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85428.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:45:47,770 INFO [train.py:898] (3/4) Epoch 24, batch 1850, loss[loss=0.1575, simple_loss=0.2548, pruned_loss=0.0301, over 18368.00 frames. ], tot_loss[loss=0.1604, simple_loss=0.2502, pruned_loss=0.03532, over 3570790.97 frames. 
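Batch sizes in this section range from 39 to 129 utterances while per-batch frame counts stay between roughly 12k and 18.5k: batches are assembled up to a total-duration cap rather than a fixed utterance count, so one batch holds many short cuts or few long ones. If these counts are 10 ms frames after 4x subsampling (an assumption), 18.5k frames is about 740 s of audio per batch. A toy duration-capped batcher in that spirit; the real sampler also buckets cuts by length, and the cap below is illustrative:

```python
# Toy duration-capped batching: fill each batch until adding the next
# utterance would exceed the duration budget, then start a new one.
def duration_batches(cut_durations, max_duration: float = 750.0):
    """cut_durations: seconds per utterance; yields lists of indices."""
    batch, total = [], 0.0
    for i, dur in enumerate(cut_durations):
        if batch and total + dur > max_duration:
            yield batch
            batch, total = [], 0.0
        batch.append(i)
        total += dur
    if batch:
        yield batch

durations = [6.0] * 120 + [18.0] * 40   # short cuts pack larger batches
for b in duration_batches(durations):
    print(len(b), "utterances,", sum(durations[i] for i in b), "seconds")
```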
], batch size: 50, lr: 4.69e-03, grad_scale: 8.0 2023-03-09 17:46:05,523 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4880, 5.9754, 5.5752, 5.7927, 5.6157, 5.3885, 6.0691, 6.0317], device='cuda:3'), covar=tensor([0.1116, 0.0762, 0.0415, 0.0767, 0.1343, 0.0734, 0.0556, 0.0696], device='cuda:3'), in_proj_covar=tensor([0.0622, 0.0544, 0.0389, 0.0571, 0.0765, 0.0559, 0.0780, 0.0593], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 17:46:15,534 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85456.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:46:23,641 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85463.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 17:46:30,473 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6123, 3.6078, 3.4415, 3.1131, 3.3054, 2.7137, 2.6911, 3.5656], device='cuda:3'), covar=tensor([0.0071, 0.0102, 0.0090, 0.0143, 0.0106, 0.0204, 0.0234, 0.0075], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0166, 0.0139, 0.0190, 0.0148, 0.0181, 0.0187, 0.0125], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 17:46:32,424 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85471.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:46:45,576 INFO [train.py:898] (3/4) Epoch 24, batch 1900, loss[loss=0.1805, simple_loss=0.2792, pruned_loss=0.04085, over 17085.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2513, pruned_loss=0.03572, over 3574419.39 frames. ], batch size: 78, lr: 4.69e-03, grad_scale: 8.0 2023-03-09 17:46:46,003 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85483.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:46:52,942 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85489.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:46:58,614 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.851e+02 2.648e+02 3.187e+02 3.710e+02 7.319e+02, threshold=6.375e+02, percent-clipped=2.0 2023-03-09 17:47:10,832 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=85504.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:47:24,497 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1726, 4.3122, 2.7973, 4.4145, 5.4872, 3.1405, 4.3389, 4.0519], device='cuda:3'), covar=tensor([0.0137, 0.1200, 0.1217, 0.0455, 0.0058, 0.0828, 0.0392, 0.0687], device='cuda:3'), in_proj_covar=tensor([0.0176, 0.0275, 0.0207, 0.0202, 0.0135, 0.0187, 0.0221, 0.0228], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:47:40,227 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85530.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:47:43,283 INFO [train.py:898] (3/4) Epoch 24, batch 1950, loss[loss=0.1781, simple_loss=0.2681, pruned_loss=0.0441, over 17803.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2512, pruned_loss=0.03579, over 3577346.10 frames. 
], batch size: 70, lr: 4.68e-03, grad_scale: 8.0 2023-03-09 17:47:51,466 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8129, 4.5866, 4.5018, 3.2952, 3.7675, 3.4128, 2.6791, 2.6617], device='cuda:3'), covar=tensor([0.0236, 0.0141, 0.0091, 0.0343, 0.0333, 0.0280, 0.0746, 0.0813], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0061, 0.0065, 0.0069, 0.0092, 0.0068, 0.0078, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 17:47:57,209 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85544.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:48:18,383 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.83 vs. limit=5.0 2023-03-09 17:48:28,465 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9170, 4.6256, 4.5666, 3.5243, 3.9237, 3.5790, 2.6752, 2.5806], device='cuda:3'), covar=tensor([0.0209, 0.0133, 0.0077, 0.0289, 0.0327, 0.0234, 0.0742, 0.0900], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0061, 0.0065, 0.0069, 0.0091, 0.0068, 0.0078, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 17:48:41,770 INFO [train.py:898] (3/4) Epoch 24, batch 2000, loss[loss=0.1559, simple_loss=0.2429, pruned_loss=0.0345, over 18451.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2515, pruned_loss=0.03577, over 3570033.10 frames. ], batch size: 43, lr: 4.68e-03, grad_scale: 8.0 2023-03-09 17:48:50,956 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85591.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:48:54,840 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4133, 5.9068, 5.4575, 5.7491, 5.4710, 5.3705, 5.9951, 5.9349], device='cuda:3'), covar=tensor([0.1274, 0.0809, 0.0526, 0.0746, 0.1582, 0.0736, 0.0571, 0.0771], device='cuda:3'), in_proj_covar=tensor([0.0629, 0.0547, 0.0394, 0.0575, 0.0773, 0.0567, 0.0786, 0.0600], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 17:48:55,652 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.729e+02 2.662e+02 3.281e+02 4.045e+02 7.861e+02, threshold=6.561e+02, percent-clipped=1.0 2023-03-09 17:49:10,699 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85607.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:49:25,001 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6567, 3.4404, 5.0513, 3.1204, 4.3681, 2.5584, 3.0279, 1.7585], device='cuda:3'), covar=tensor([0.1283, 0.1030, 0.0120, 0.0832, 0.0530, 0.2595, 0.2483, 0.2193], device='cuda:3'), in_proj_covar=tensor([0.0226, 0.0248, 0.0208, 0.0203, 0.0261, 0.0275, 0.0332, 0.0243], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 17:49:31,505 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85625.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:49:40,102 INFO [train.py:898] (3/4) Epoch 24, batch 2050, loss[loss=0.1506, simple_loss=0.2314, pruned_loss=0.03493, over 18577.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2516, pruned_loss=0.03583, over 3575158.82 frames. 
], batch size: 45, lr: 4.68e-03, grad_scale: 8.0 2023-03-09 17:50:06,869 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=85655.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:50:08,526 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.67 vs. limit=5.0 2023-03-09 17:50:38,509 INFO [train.py:898] (3/4) Epoch 24, batch 2100, loss[loss=0.1631, simple_loss=0.2372, pruned_loss=0.04453, over 17589.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2515, pruned_loss=0.03589, over 3559044.45 frames. ], batch size: 39, lr: 4.68e-03, grad_scale: 8.0 2023-03-09 17:50:42,190 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85686.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 17:50:52,034 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.890e+02 2.606e+02 3.124e+02 4.074e+02 9.215e+02, threshold=6.247e+02, percent-clipped=3.0 2023-03-09 17:51:02,139 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0033, 4.2994, 2.7401, 4.3222, 5.2901, 2.8739, 4.1039, 4.0936], device='cuda:3'), covar=tensor([0.0179, 0.1074, 0.1467, 0.0547, 0.0092, 0.1132, 0.0600, 0.0676], device='cuda:3'), in_proj_covar=tensor([0.0175, 0.0274, 0.0206, 0.0201, 0.0134, 0.0186, 0.0219, 0.0227], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:51:37,123 INFO [train.py:898] (3/4) Epoch 24, batch 2150, loss[loss=0.1625, simple_loss=0.2537, pruned_loss=0.03565, over 18595.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2513, pruned_loss=0.03594, over 3564189.74 frames. ], batch size: 54, lr: 4.68e-03, grad_scale: 8.0 2023-03-09 17:52:12,638 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85763.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:52:21,599 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85771.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:52:35,304 INFO [train.py:898] (3/4) Epoch 24, batch 2200, loss[loss=0.1371, simple_loss=0.2163, pruned_loss=0.02889, over 18392.00 frames. ], tot_loss[loss=0.1606, simple_loss=0.2503, pruned_loss=0.03547, over 3570596.37 frames. ], batch size: 42, lr: 4.68e-03, grad_scale: 4.0 2023-03-09 17:52:36,669 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85784.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:52:49,955 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.809e+02 2.630e+02 3.062e+02 3.735e+02 6.133e+02, threshold=6.125e+02, percent-clipped=0.0 2023-03-09 17:53:08,613 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=85811.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:53:17,771 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=85819.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:53:33,940 INFO [train.py:898] (3/4) Epoch 24, batch 2250, loss[loss=0.1541, simple_loss=0.2406, pruned_loss=0.03378, over 18398.00 frames. ], tot_loss[loss=0.1608, simple_loss=0.2503, pruned_loss=0.03564, over 3562280.61 frames. 
], batch size: 48, lr: 4.68e-03, grad_scale: 4.0 2023-03-09 17:53:40,058 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0083, 4.2141, 2.6632, 4.3055, 5.3024, 2.8965, 4.0662, 4.0304], device='cuda:3'), covar=tensor([0.0168, 0.1084, 0.1436, 0.0544, 0.0091, 0.1082, 0.0553, 0.0667], device='cuda:3'), in_proj_covar=tensor([0.0174, 0.0273, 0.0205, 0.0200, 0.0133, 0.0185, 0.0218, 0.0225], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:53:40,993 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85839.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:54:13,099 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4768, 5.2408, 5.6938, 5.7882, 5.4083, 6.2146, 5.9368, 5.4425], device='cuda:3'), covar=tensor([0.1165, 0.0701, 0.0837, 0.0680, 0.1262, 0.0769, 0.0582, 0.1726], device='cuda:3'), in_proj_covar=tensor([0.0366, 0.0298, 0.0318, 0.0323, 0.0335, 0.0432, 0.0290, 0.0428], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 17:54:23,541 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85875.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:54:33,028 INFO [train.py:898] (3/4) Epoch 24, batch 2300, loss[loss=0.139, simple_loss=0.218, pruned_loss=0.02997, over 18434.00 frames. ], tot_loss[loss=0.1593, simple_loss=0.2488, pruned_loss=0.03494, over 3577685.94 frames. ], batch size: 42, lr: 4.68e-03, grad_scale: 4.0 2023-03-09 17:54:36,444 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85886.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:54:47,538 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.050e+02 2.637e+02 3.133e+02 3.716e+02 1.062e+03, threshold=6.266e+02, percent-clipped=3.0 2023-03-09 17:54:50,263 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6037, 3.5674, 3.4374, 3.1148, 3.3159, 2.7254, 2.6269, 3.6902], device='cuda:3'), covar=tensor([0.0070, 0.0092, 0.0094, 0.0151, 0.0120, 0.0205, 0.0244, 0.0068], device='cuda:3'), in_proj_covar=tensor([0.0148, 0.0168, 0.0142, 0.0194, 0.0150, 0.0184, 0.0190, 0.0127], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 17:54:56,818 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7712, 3.6112, 2.2935, 4.4856, 3.2177, 3.9815, 2.4471, 3.6973], device='cuda:3'), covar=tensor([0.0524, 0.0653, 0.1318, 0.0444, 0.0701, 0.0348, 0.1208, 0.0529], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0227, 0.0191, 0.0289, 0.0195, 0.0269, 0.0202, 0.0204], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:55:27,649 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85930.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 17:55:30,549 INFO [train.py:898] (3/4) Epoch 24, batch 2350, loss[loss=0.1846, simple_loss=0.27, pruned_loss=0.04958, over 18378.00 frames. ], tot_loss[loss=0.1599, simple_loss=0.2499, pruned_loss=0.03496, over 3584011.97 frames. 
], batch size: 50, lr: 4.67e-03, grad_scale: 4.0 2023-03-09 17:55:34,685 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85936.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:55:58,242 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85957.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:56:26,831 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85981.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 17:56:28,820 INFO [train.py:898] (3/4) Epoch 24, batch 2400, loss[loss=0.1628, simple_loss=0.2584, pruned_loss=0.0336, over 17055.00 frames. ], tot_loss[loss=0.1598, simple_loss=0.2499, pruned_loss=0.03487, over 3577955.22 frames. ], batch size: 78, lr: 4.67e-03, grad_scale: 8.0 2023-03-09 17:56:39,038 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85991.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 17:56:44,281 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.063e+02 2.665e+02 3.240e+02 3.852e+02 9.011e+02, threshold=6.481e+02, percent-clipped=2.0 2023-03-09 17:56:56,565 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86002.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:56:59,981 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4791, 4.1608, 2.6082, 3.9187, 3.9801, 4.1547, 4.0160, 2.5811], device='cuda:3'), covar=tensor([0.0294, 0.0094, 0.0775, 0.0235, 0.0101, 0.0101, 0.0138, 0.1011], device='cuda:3'), in_proj_covar=tensor([0.0089, 0.0082, 0.0096, 0.0096, 0.0087, 0.0077, 0.0086, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 17:57:15,817 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86018.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:57:32,753 INFO [train.py:898] (3/4) Epoch 24, batch 2450, loss[loss=0.1538, simple_loss=0.2367, pruned_loss=0.03544, over 18346.00 frames. ], tot_loss[loss=0.1597, simple_loss=0.2496, pruned_loss=0.03491, over 3573940.15 frames. ], batch size: 46, lr: 4.67e-03, grad_scale: 8.0 2023-03-09 17:57:54,753 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86052.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:57:57,221 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.40 vs. limit=2.0 2023-03-09 17:58:07,281 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86063.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:58:12,813 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7089, 3.0659, 2.6090, 2.9597, 3.7517, 3.6495, 3.1990, 3.0273], device='cuda:3'), covar=tensor([0.0163, 0.0272, 0.0514, 0.0354, 0.0158, 0.0147, 0.0345, 0.0345], device='cuda:3'), in_proj_covar=tensor([0.0143, 0.0143, 0.0165, 0.0164, 0.0137, 0.0122, 0.0159, 0.0163], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 17:58:14,984 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86069.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:58:31,029 INFO [train.py:898] (3/4) Epoch 24, batch 2500, loss[loss=0.1777, simple_loss=0.2681, pruned_loss=0.0437, over 18232.00 frames. ], tot_loss[loss=0.1599, simple_loss=0.2498, pruned_loss=0.03495, over 3570321.60 frames. 
], batch size: 60, lr: 4.67e-03, grad_scale: 4.0 2023-03-09 17:58:32,296 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86084.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:58:46,799 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.916e+02 2.583e+02 3.100e+02 3.790e+02 6.020e+02, threshold=6.201e+02, percent-clipped=0.0 2023-03-09 17:59:05,038 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86113.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:59:12,788 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6512, 4.9082, 2.6774, 4.7295, 4.6499, 4.8911, 4.7363, 2.5923], device='cuda:3'), covar=tensor([0.0284, 0.0071, 0.0762, 0.0103, 0.0087, 0.0087, 0.0103, 0.1030], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0082, 0.0097, 0.0096, 0.0088, 0.0077, 0.0086, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 17:59:25,987 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86130.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:59:27,920 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86132.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:59:28,024 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3951, 5.9773, 5.5853, 5.7603, 5.5608, 5.3694, 6.0202, 5.9663], device='cuda:3'), covar=tensor([0.1261, 0.0741, 0.0453, 0.0685, 0.1304, 0.0699, 0.0579, 0.0700], device='cuda:3'), in_proj_covar=tensor([0.0615, 0.0537, 0.0383, 0.0563, 0.0759, 0.0558, 0.0770, 0.0588], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 17:59:28,882 INFO [train.py:898] (3/4) Epoch 24, batch 2550, loss[loss=0.1686, simple_loss=0.2649, pruned_loss=0.0361, over 18572.00 frames. ], tot_loss[loss=0.16, simple_loss=0.25, pruned_loss=0.035, over 3575392.47 frames. ], batch size: 54, lr: 4.67e-03, grad_scale: 4.0 2023-03-09 17:59:35,746 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86139.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 17:59:44,732 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.73 vs. limit=5.0 2023-03-09 18:00:26,686 INFO [train.py:898] (3/4) Epoch 24, batch 2600, loss[loss=0.1919, simple_loss=0.2889, pruned_loss=0.04749, over 17713.00 frames. ], tot_loss[loss=0.16, simple_loss=0.2503, pruned_loss=0.03489, over 3585076.40 frames. ], batch size: 70, lr: 4.67e-03, grad_scale: 4.0 2023-03-09 18:00:30,416 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86186.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:00:31,346 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86187.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:00:43,000 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.777e+02 2.596e+02 3.106e+02 3.737e+02 7.665e+02, threshold=6.211e+02, percent-clipped=2.0 2023-03-09 18:00:54,719 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-09 18:01:23,324 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86231.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:01:25,433 INFO [train.py:898] (3/4) Epoch 24, batch 2650, loss[loss=0.1658, simple_loss=0.2559, pruned_loss=0.03786, over 18061.00 frames. 
], tot_loss[loss=0.1598, simple_loss=0.2499, pruned_loss=0.03483, over 3580950.16 frames. ], batch size: 62, lr: 4.67e-03, grad_scale: 4.0 2023-03-09 18:01:26,656 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86234.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:01:41,053 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86246.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:02:22,661 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86281.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:02:24,586 INFO [train.py:898] (3/4) Epoch 24, batch 2700, loss[loss=0.152, simple_loss=0.2492, pruned_loss=0.02744, over 18244.00 frames. ], tot_loss[loss=0.1593, simple_loss=0.2493, pruned_loss=0.03465, over 3576837.44 frames. ], batch size: 45, lr: 4.66e-03, grad_scale: 4.0 2023-03-09 18:02:28,144 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86286.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 18:02:41,075 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.842e+02 2.486e+02 2.896e+02 3.686e+02 7.740e+02, threshold=5.792e+02, percent-clipped=4.0 2023-03-09 18:02:52,859 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86307.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:02:59,446 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86313.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:03:18,480 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86329.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:03:23,303 INFO [train.py:898] (3/4) Epoch 24, batch 2750, loss[loss=0.1351, simple_loss=0.2212, pruned_loss=0.02449, over 18395.00 frames. ], tot_loss[loss=0.1596, simple_loss=0.25, pruned_loss=0.03467, over 3586251.12 frames. 
], batch size: 42, lr: 4.66e-03, grad_scale: 4.0 2023-03-09 18:03:52,852 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86358.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:03:55,330 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0110, 5.6103, 3.1991, 5.4103, 5.2623, 5.5763, 5.4770, 3.0877], device='cuda:3'), covar=tensor([0.0217, 0.0042, 0.0608, 0.0065, 0.0065, 0.0067, 0.0066, 0.0811], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0082, 0.0096, 0.0096, 0.0088, 0.0077, 0.0086, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 18:03:56,385 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5651, 6.1191, 5.6199, 5.9423, 5.8005, 5.5682, 6.2615, 6.1728], device='cuda:3'), covar=tensor([0.1182, 0.0724, 0.0450, 0.0683, 0.1204, 0.0720, 0.0497, 0.0683], device='cuda:3'), in_proj_covar=tensor([0.0617, 0.0541, 0.0387, 0.0569, 0.0760, 0.0561, 0.0774, 0.0593], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 18:04:05,250 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4523, 5.9410, 5.5335, 5.7243, 5.5811, 5.3993, 6.0509, 5.9572], device='cuda:3'), covar=tensor([0.1163, 0.0791, 0.0441, 0.0737, 0.1252, 0.0757, 0.0524, 0.0773], device='cuda:3'), in_proj_covar=tensor([0.0615, 0.0540, 0.0386, 0.0568, 0.0759, 0.0560, 0.0773, 0.0592], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 18:04:21,070 INFO [train.py:898] (3/4) Epoch 24, batch 2800, loss[loss=0.1541, simple_loss=0.2428, pruned_loss=0.03267, over 18534.00 frames. ], tot_loss[loss=0.1592, simple_loss=0.2494, pruned_loss=0.03444, over 3600530.48 frames. ], batch size: 49, lr: 4.66e-03, grad_scale: 8.0 2023-03-09 18:04:37,472 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.618e+02 2.577e+02 3.017e+02 3.554e+02 5.358e+02, threshold=6.034e+02, percent-clipped=0.0 2023-03-09 18:04:48,828 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0279, 4.7102, 4.8414, 3.7201, 4.0308, 3.6080, 3.0608, 3.2020], device='cuda:3'), covar=tensor([0.0215, 0.0161, 0.0080, 0.0256, 0.0306, 0.0223, 0.0588, 0.0626], device='cuda:3'), in_proj_covar=tensor([0.0072, 0.0062, 0.0065, 0.0069, 0.0091, 0.0068, 0.0078, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 18:04:50,982 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86408.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:05:10,725 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86425.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:05:19,754 INFO [train.py:898] (3/4) Epoch 24, batch 2850, loss[loss=0.1729, simple_loss=0.2677, pruned_loss=0.03907, over 17929.00 frames. ], tot_loss[loss=0.1588, simple_loss=0.249, pruned_loss=0.03431, over 3610927.07 frames. 
], batch size: 65, lr: 4.66e-03, grad_scale: 8.0 2023-03-09 18:05:29,138 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86440.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:05:34,710 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86445.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:05:59,009 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.50 vs. limit=2.0 2023-03-09 18:06:18,046 INFO [train.py:898] (3/4) Epoch 24, batch 2900, loss[loss=0.145, simple_loss=0.2327, pruned_loss=0.02864, over 18383.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2496, pruned_loss=0.03454, over 3606131.69 frames. ], batch size: 46, lr: 4.66e-03, grad_scale: 4.0 2023-03-09 18:06:22,874 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.38 vs. limit=5.0 2023-03-09 18:06:36,142 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.815e+02 2.792e+02 3.220e+02 3.955e+02 8.779e+02, threshold=6.440e+02, percent-clipped=4.0 2023-03-09 18:06:39,850 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86501.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:06:45,533 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86506.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 18:07:14,360 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86531.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:07:14,453 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7994, 2.8841, 2.7500, 3.0768, 3.7700, 3.7143, 3.2956, 3.0970], device='cuda:3'), covar=tensor([0.0173, 0.0328, 0.0501, 0.0331, 0.0171, 0.0159, 0.0350, 0.0388], device='cuda:3'), in_proj_covar=tensor([0.0143, 0.0144, 0.0166, 0.0165, 0.0138, 0.0123, 0.0160, 0.0164], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:07:16,258 INFO [train.py:898] (3/4) Epoch 24, batch 2950, loss[loss=0.1612, simple_loss=0.2561, pruned_loss=0.03317, over 18495.00 frames. ], tot_loss[loss=0.1599, simple_loss=0.2503, pruned_loss=0.03477, over 3608692.23 frames. ], batch size: 51, lr: 4.66e-03, grad_scale: 4.0 2023-03-09 18:07:33,501 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5541, 3.4070, 2.3702, 4.3546, 2.9952, 4.1546, 2.4363, 3.8702], device='cuda:3'), covar=tensor([0.0664, 0.0866, 0.1364, 0.0462, 0.0877, 0.0286, 0.1183, 0.0422], device='cuda:3'), in_proj_covar=tensor([0.0221, 0.0230, 0.0194, 0.0293, 0.0197, 0.0274, 0.0205, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:08:10,863 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86579.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:08:12,247 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86580.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:08:15,188 INFO [train.py:898] (3/4) Epoch 24, batch 3000, loss[loss=0.1505, simple_loss=0.2413, pruned_loss=0.02985, over 18359.00 frames. ], tot_loss[loss=0.1603, simple_loss=0.2506, pruned_loss=0.03502, over 3597544.14 frames. 
], batch size: 55, lr: 4.66e-03, grad_scale: 4.0 2023-03-09 18:08:15,188 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 18:08:27,161 INFO [train.py:932] (3/4) Epoch 24, validation: loss=0.1501, simple_loss=0.2489, pruned_loss=0.02569, over 944034.00 frames. 2023-03-09 18:08:27,162 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 18:08:31,513 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86586.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 18:08:44,561 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.708e+02 2.585e+02 3.043e+02 3.671e+02 7.667e+02, threshold=6.086e+02, percent-clipped=3.0 2023-03-09 18:08:50,158 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86602.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:09:02,674 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86613.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:09:26,130 INFO [train.py:898] (3/4) Epoch 24, batch 3050, loss[loss=0.1994, simple_loss=0.283, pruned_loss=0.05788, over 12362.00 frames. ], tot_loss[loss=0.1603, simple_loss=0.2503, pruned_loss=0.03517, over 3580181.25 frames. ], batch size: 130, lr: 4.66e-03, grad_scale: 4.0 2023-03-09 18:09:27,512 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86634.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 18:09:35,957 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86641.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:09:55,830 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86658.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:09:59,256 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86661.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:10:04,810 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2456, 5.9523, 5.4521, 5.9244, 5.3568, 5.8485, 6.2157, 6.0169], device='cuda:3'), covar=tensor([0.2530, 0.1164, 0.0767, 0.1124, 0.2313, 0.1058, 0.0940, 0.1074], device='cuda:3'), in_proj_covar=tensor([0.0616, 0.0538, 0.0385, 0.0565, 0.0758, 0.0558, 0.0772, 0.0589], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 18:10:24,361 INFO [train.py:898] (3/4) Epoch 24, batch 3100, loss[loss=0.1712, simple_loss=0.2697, pruned_loss=0.03635, over 18401.00 frames. ], tot_loss[loss=0.16, simple_loss=0.2497, pruned_loss=0.03512, over 3578754.84 frames. 
], batch size: 52, lr: 4.65e-03, grad_scale: 4.0 2023-03-09 18:10:41,644 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.778e+02 2.436e+02 2.944e+02 3.607e+02 6.796e+02, threshold=5.887e+02, percent-clipped=2.0 2023-03-09 18:10:51,068 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86706.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:10:54,050 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86708.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:11:13,504 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86725.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:11:22,051 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0236, 5.0990, 5.1051, 4.7936, 4.8910, 4.8557, 5.1555, 5.1612], device='cuda:3'), covar=tensor([0.0065, 0.0053, 0.0055, 0.0109, 0.0056, 0.0147, 0.0078, 0.0101], device='cuda:3'), in_proj_covar=tensor([0.0098, 0.0072, 0.0077, 0.0096, 0.0077, 0.0107, 0.0090, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 18:11:22,884 INFO [train.py:898] (3/4) Epoch 24, batch 3150, loss[loss=0.1454, simple_loss=0.2374, pruned_loss=0.02668, over 18423.00 frames. ], tot_loss[loss=0.1589, simple_loss=0.2487, pruned_loss=0.03461, over 3588783.79 frames. ], batch size: 48, lr: 4.65e-03, grad_scale: 4.0 2023-03-09 18:11:49,641 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86756.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:12:09,638 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86773.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:12:20,955 INFO [train.py:898] (3/4) Epoch 24, batch 3200, loss[loss=0.1746, simple_loss=0.2599, pruned_loss=0.04467, over 18376.00 frames. ], tot_loss[loss=0.1589, simple_loss=0.2485, pruned_loss=0.03467, over 3594334.90 frames. 
], batch size: 56, lr: 4.65e-03, grad_scale: 8.0 2023-03-09 18:12:27,488 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8005, 4.7608, 4.8358, 4.4672, 4.6192, 4.6895, 4.9439, 4.7819], device='cuda:3'), covar=tensor([0.0102, 0.0099, 0.0096, 0.0154, 0.0093, 0.0203, 0.0103, 0.0157], device='cuda:3'), in_proj_covar=tensor([0.0097, 0.0071, 0.0076, 0.0095, 0.0077, 0.0106, 0.0089, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 18:12:36,041 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86796.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:12:37,987 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.596e+02 2.507e+02 2.980e+02 3.406e+02 8.664e+02, threshold=5.960e+02, percent-clipped=4.0 2023-03-09 18:12:42,612 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86801.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 18:12:51,812 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7409, 5.2456, 5.1788, 5.1871, 4.6611, 5.0821, 4.5865, 5.1054], device='cuda:3'), covar=tensor([0.0270, 0.0319, 0.0224, 0.0467, 0.0427, 0.0260, 0.1099, 0.0357], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0272, 0.0268, 0.0348, 0.0283, 0.0278, 0.0315, 0.0271], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 18:12:57,928 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86814.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:13:19,230 INFO [train.py:898] (3/4) Epoch 24, batch 3250, loss[loss=0.1791, simple_loss=0.2726, pruned_loss=0.04283, over 18336.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2481, pruned_loss=0.03446, over 3602096.77 frames. ], batch size: 56, lr: 4.65e-03, grad_scale: 8.0 2023-03-09 18:13:31,500 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9096, 3.1305, 4.6674, 3.9162, 2.9672, 4.8252, 4.0615, 3.1034], device='cuda:3'), covar=tensor([0.0431, 0.1330, 0.0239, 0.0446, 0.1432, 0.0242, 0.0610, 0.0920], device='cuda:3'), in_proj_covar=tensor([0.0215, 0.0241, 0.0221, 0.0169, 0.0227, 0.0217, 0.0255, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 18:14:08,845 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86875.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:14:12,222 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86878.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:14:18,058 INFO [train.py:898] (3/4) Epoch 24, batch 3300, loss[loss=0.1607, simple_loss=0.2583, pruned_loss=0.03154, over 15889.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2486, pruned_loss=0.03426, over 3600335.46 frames. ], batch size: 94, lr: 4.65e-03, grad_scale: 8.0 2023-03-09 18:14:35,716 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.783e+02 2.678e+02 3.309e+02 3.868e+02 1.091e+03, threshold=6.618e+02, percent-clipped=4.0 2023-03-09 18:14:40,491 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86902.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:15:16,763 INFO [train.py:898] (3/4) Epoch 24, batch 3350, loss[loss=0.1691, simple_loss=0.2618, pruned_loss=0.03818, over 18460.00 frames. 
], tot_loss[loss=0.1587, simple_loss=0.2488, pruned_loss=0.03428, over 3603188.15 frames. ], batch size: 59, lr: 4.65e-03, grad_scale: 8.0 2023-03-09 18:15:20,413 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86936.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:15:24,078 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86939.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:15:35,657 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8016, 5.3485, 5.3048, 5.2972, 4.7258, 5.1569, 4.6861, 5.1870], device='cuda:3'), covar=tensor([0.0292, 0.0294, 0.0201, 0.0476, 0.0437, 0.0255, 0.1068, 0.0320], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0269, 0.0266, 0.0347, 0.0282, 0.0277, 0.0313, 0.0269], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 18:15:36,571 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=86950.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:15:53,286 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0377, 3.6580, 4.9184, 4.2449, 3.1978, 2.9780, 4.2431, 5.1813], device='cuda:3'), covar=tensor([0.0762, 0.1410, 0.0221, 0.0487, 0.1094, 0.1287, 0.0481, 0.0217], device='cuda:3'), in_proj_covar=tensor([0.0153, 0.0282, 0.0166, 0.0185, 0.0195, 0.0194, 0.0198, 0.0206], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:16:07,797 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-09 18:16:14,779 INFO [train.py:898] (3/4) Epoch 24, batch 3400, loss[loss=0.1759, simple_loss=0.265, pruned_loss=0.0434, over 18411.00 frames. ], tot_loss[loss=0.1583, simple_loss=0.2484, pruned_loss=0.03411, over 3598063.76 frames. ], batch size: 48, lr: 4.65e-03, grad_scale: 8.0 2023-03-09 18:16:15,255 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9552, 3.2608, 4.6790, 3.9637, 3.1066, 4.9024, 4.1280, 3.2751], device='cuda:3'), covar=tensor([0.0447, 0.1240, 0.0210, 0.0422, 0.1336, 0.0220, 0.0507, 0.0846], device='cuda:3'), in_proj_covar=tensor([0.0214, 0.0240, 0.0220, 0.0169, 0.0226, 0.0216, 0.0253, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 18:16:32,669 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.872e+02 2.525e+02 2.880e+02 3.383e+02 6.366e+02, threshold=5.759e+02, percent-clipped=0.0 2023-03-09 18:16:38,095 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87002.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 18:17:13,579 INFO [train.py:898] (3/4) Epoch 24, batch 3450, loss[loss=0.1609, simple_loss=0.2558, pruned_loss=0.03307, over 18618.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2485, pruned_loss=0.03422, over 3593955.00 frames. 
], batch size: 52, lr: 4.64e-03, grad_scale: 8.0 2023-03-09 18:17:48,476 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87063.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 18:18:07,101 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8215, 4.9869, 2.3412, 4.8845, 4.7071, 4.9346, 4.7564, 2.1797], device='cuda:3'), covar=tensor([0.0274, 0.0138, 0.1145, 0.0131, 0.0127, 0.0193, 0.0160, 0.1713], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0082, 0.0097, 0.0096, 0.0088, 0.0077, 0.0085, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 18:18:11,254 INFO [train.py:898] (3/4) Epoch 24, batch 3500, loss[loss=0.1648, simple_loss=0.2579, pruned_loss=0.03584, over 16506.00 frames. ], tot_loss[loss=0.159, simple_loss=0.2492, pruned_loss=0.03444, over 3595717.16 frames. ], batch size: 95, lr: 4.64e-03, grad_scale: 8.0 2023-03-09 18:18:26,478 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87096.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:18:28,429 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.855e+02 2.643e+02 3.056e+02 3.679e+02 7.050e+02, threshold=6.112e+02, percent-clipped=3.0 2023-03-09 18:18:32,120 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87101.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 18:18:33,365 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.3025, 3.7136, 2.4778, 3.5141, 4.4921, 2.5106, 3.5303, 3.6448], device='cuda:3'), covar=tensor([0.0251, 0.1093, 0.1585, 0.0754, 0.0145, 0.1272, 0.0689, 0.0722], device='cuda:3'), in_proj_covar=tensor([0.0174, 0.0274, 0.0205, 0.0199, 0.0134, 0.0185, 0.0218, 0.0225], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:19:06,667 INFO [train.py:898] (3/4) Epoch 24, batch 3550, loss[loss=0.163, simple_loss=0.2556, pruned_loss=0.03521, over 18483.00 frames. ], tot_loss[loss=0.1598, simple_loss=0.25, pruned_loss=0.03482, over 3590227.24 frames. ], batch size: 53, lr: 4.64e-03, grad_scale: 8.0 2023-03-09 18:19:18,592 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=87144.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:19:23,970 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=87149.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:19:46,346 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87170.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:20:00,497 INFO [train.py:898] (3/4) Epoch 24, batch 3600, loss[loss=0.1406, simple_loss=0.2343, pruned_loss=0.02342, over 18283.00 frames. ], tot_loss[loss=0.1595, simple_loss=0.2495, pruned_loss=0.03477, over 3578843.36 frames. 
], batch size: 49, lr: 4.64e-03, grad_scale: 8.0 2023-03-09 18:20:16,547 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.830e+02 2.465e+02 3.001e+02 3.686e+02 6.054e+02, threshold=6.002e+02, percent-clipped=0.0 2023-03-09 18:20:21,455 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9039, 4.9959, 5.0332, 4.6905, 4.7481, 4.7856, 5.0748, 5.0942], device='cuda:3'), covar=tensor([0.0095, 0.0064, 0.0064, 0.0120, 0.0066, 0.0159, 0.0079, 0.0092], device='cuda:3'), in_proj_covar=tensor([0.0097, 0.0072, 0.0078, 0.0096, 0.0077, 0.0107, 0.0090, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 18:21:03,013 INFO [train.py:898] (3/4) Epoch 25, batch 0, loss[loss=0.1563, simple_loss=0.248, pruned_loss=0.03234, over 18289.00 frames. ], tot_loss[loss=0.1563, simple_loss=0.248, pruned_loss=0.03234, over 18289.00 frames. ], batch size: 49, lr: 4.54e-03, grad_scale: 8.0 2023-03-09 18:21:03,014 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 18:21:14,846 INFO [train.py:932] (3/4) Epoch 25, validation: loss=0.1499, simple_loss=0.2489, pruned_loss=0.0255, over 944034.00 frames. 2023-03-09 18:21:14,847 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 18:21:30,441 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87230.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:21:34,914 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87234.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:21:37,168 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87236.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:22:14,603 INFO [train.py:898] (3/4) Epoch 25, batch 50, loss[loss=0.147, simple_loss=0.2266, pruned_loss=0.03374, over 18562.00 frames. ], tot_loss[loss=0.1575, simple_loss=0.2485, pruned_loss=0.03325, over 816577.18 frames. 
], batch size: 45, lr: 4.54e-03, grad_scale: 8.0 2023-03-09 18:22:34,853 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=87284.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:22:43,760 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87291.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:22:52,237 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.879e+02 2.489e+02 2.896e+02 3.619e+02 7.206e+02, threshold=5.791e+02, percent-clipped=1.0 2023-03-09 18:22:52,754 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8326, 3.6436, 5.0696, 3.0911, 4.3830, 2.6983, 3.1939, 1.9006], device='cuda:3'), covar=tensor([0.1152, 0.0909, 0.0158, 0.0905, 0.0521, 0.2406, 0.2471, 0.2129], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0248, 0.0212, 0.0203, 0.0263, 0.0277, 0.0331, 0.0242], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 18:22:53,838 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7434, 2.4531, 2.7302, 2.6992, 3.2185, 4.9342, 4.9045, 3.3288], device='cuda:3'), covar=tensor([0.1961, 0.2569, 0.2969, 0.2004, 0.2484, 0.0228, 0.0333, 0.1064], device='cuda:3'), in_proj_covar=tensor([0.0314, 0.0355, 0.0393, 0.0284, 0.0392, 0.0253, 0.0299, 0.0265], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 18:23:14,363 INFO [train.py:898] (3/4) Epoch 25, batch 100, loss[loss=0.1627, simple_loss=0.2601, pruned_loss=0.03264, over 18382.00 frames. ], tot_loss[loss=0.1591, simple_loss=0.2502, pruned_loss=0.03401, over 1436751.68 frames. ], batch size: 55, lr: 4.54e-03, grad_scale: 8.0 2023-03-09 18:23:40,601 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5978, 2.2090, 2.4762, 2.5622, 2.9733, 4.5010, 4.4711, 3.1557], device='cuda:3'), covar=tensor([0.2050, 0.2734, 0.3128, 0.2066, 0.2630, 0.0325, 0.0432, 0.1093], device='cuda:3'), in_proj_covar=tensor([0.0315, 0.0356, 0.0394, 0.0285, 0.0393, 0.0254, 0.0300, 0.0265], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 18:23:53,185 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-09 18:24:02,711 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-09 18:24:03,156 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87358.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 18:24:08,406 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87362.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:24:13,641 INFO [train.py:898] (3/4) Epoch 25, batch 150, loss[loss=0.1456, simple_loss=0.2375, pruned_loss=0.0268, over 18350.00 frames. ], tot_loss[loss=0.1581, simple_loss=0.2485, pruned_loss=0.03387, over 1925458.37 frames. ], batch size: 46, lr: 4.54e-03, grad_scale: 8.0 2023-03-09 18:24:23,465 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. 
limit=2.0 2023-03-09 18:24:27,366 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2029, 5.1833, 4.8860, 5.1133, 5.1139, 4.6073, 5.0680, 4.7982], device='cuda:3'), covar=tensor([0.0433, 0.0497, 0.1315, 0.0773, 0.0613, 0.0428, 0.0433, 0.1144], device='cuda:3'), in_proj_covar=tensor([0.0506, 0.0575, 0.0717, 0.0441, 0.0471, 0.0525, 0.0557, 0.0689], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 18:24:49,029 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.772e+02 2.588e+02 3.217e+02 3.788e+02 6.268e+02, threshold=6.435e+02, percent-clipped=1.0 2023-03-09 18:25:08,130 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-09 18:25:12,510 INFO [train.py:898] (3/4) Epoch 25, batch 200, loss[loss=0.1694, simple_loss=0.2579, pruned_loss=0.04049, over 18386.00 frames. ], tot_loss[loss=0.1593, simple_loss=0.2497, pruned_loss=0.03445, over 2299733.86 frames. ], batch size: 50, lr: 4.54e-03, grad_scale: 8.0 2023-03-09 18:25:19,897 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87423.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:26:03,954 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87461.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:26:10,403 INFO [train.py:898] (3/4) Epoch 25, batch 250, loss[loss=0.1521, simple_loss=0.2393, pruned_loss=0.03247, over 18254.00 frames. ], tot_loss[loss=0.1593, simple_loss=0.2498, pruned_loss=0.0344, over 2563961.79 frames. ], batch size: 47, lr: 4.54e-03, grad_scale: 8.0 2023-03-09 18:26:14,884 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87470.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:26:41,742 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0015, 3.7500, 5.0920, 2.8145, 4.3718, 2.6660, 3.2373, 1.8178], device='cuda:3'), covar=tensor([0.1101, 0.0930, 0.0170, 0.1051, 0.0576, 0.2530, 0.2552, 0.2197], device='cuda:3'), in_proj_covar=tensor([0.0226, 0.0250, 0.0214, 0.0205, 0.0264, 0.0278, 0.0334, 0.0245], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 18:26:46,944 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.865e+02 2.586e+02 2.892e+02 3.324e+02 7.017e+02, threshold=5.784e+02, percent-clipped=1.0 2023-03-09 18:26:55,129 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87505.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:27:04,119 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6666, 3.3908, 2.4654, 4.3455, 3.1422, 4.1436, 2.4274, 3.9969], device='cuda:3'), covar=tensor([0.0591, 0.0946, 0.1331, 0.0475, 0.0849, 0.0361, 0.1274, 0.0409], device='cuda:3'), in_proj_covar=tensor([0.0219, 0.0228, 0.0194, 0.0292, 0.0196, 0.0270, 0.0205, 0.0206], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:27:09,410 INFO [train.py:898] (3/4) Epoch 25, batch 300, loss[loss=0.1576, simple_loss=0.2548, pruned_loss=0.03019, over 16144.00 frames. ], tot_loss[loss=0.1588, simple_loss=0.2491, pruned_loss=0.03424, over 2790886.10 frames. 
], batch size: 95, lr: 4.54e-03, grad_scale: 8.0 2023-03-09 18:27:10,679 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=87518.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:27:16,115 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87522.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:27:29,481 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87534.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:27:43,591 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9685, 4.0234, 4.0425, 3.8449, 3.9254, 3.9020, 4.0616, 4.0822], device='cuda:3'), covar=tensor([0.0097, 0.0070, 0.0074, 0.0112, 0.0079, 0.0157, 0.0079, 0.0092], device='cuda:3'), in_proj_covar=tensor([0.0097, 0.0071, 0.0077, 0.0095, 0.0077, 0.0106, 0.0090, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 18:28:07,659 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87566.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 18:28:08,445 INFO [train.py:898] (3/4) Epoch 25, batch 350, loss[loss=0.1494, simple_loss=0.2403, pruned_loss=0.02929, over 18287.00 frames. ], tot_loss[loss=0.1593, simple_loss=0.2496, pruned_loss=0.03453, over 2960474.19 frames. ], batch size: 47, lr: 4.53e-03, grad_scale: 8.0 2023-03-09 18:28:26,404 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=87582.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:28:28,803 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87584.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:28:30,947 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87586.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:28:34,551 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87589.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:28:39,142 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5741, 3.3406, 2.3277, 4.3206, 3.0056, 4.0481, 2.3013, 3.9035], device='cuda:3'), covar=tensor([0.0690, 0.0935, 0.1590, 0.0559, 0.0953, 0.0354, 0.1421, 0.0423], device='cuda:3'), in_proj_covar=tensor([0.0220, 0.0229, 0.0195, 0.0293, 0.0196, 0.0271, 0.0206, 0.0206], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:28:41,914 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9331, 3.6608, 4.9828, 4.4756, 3.2950, 3.0546, 4.4013, 5.1836], device='cuda:3'), covar=tensor([0.0816, 0.1584, 0.0243, 0.0389, 0.1006, 0.1255, 0.0389, 0.0313], device='cuda:3'), in_proj_covar=tensor([0.0152, 0.0279, 0.0164, 0.0185, 0.0194, 0.0194, 0.0197, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:28:44,692 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.762e+02 2.689e+02 3.189e+02 3.817e+02 6.961e+02, threshold=6.379e+02, percent-clipped=1.0 2023-03-09 18:28:53,194 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87605.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:29:06,923 INFO [train.py:898] (3/4) Epoch 25, batch 400, loss[loss=0.1613, simple_loss=0.2503, pruned_loss=0.03617, over 18480.00 frames. ], tot_loss[loss=0.1591, simple_loss=0.2497, pruned_loss=0.03421, over 3100823.20 frames. 
], batch size: 59, lr: 4.53e-03, grad_scale: 8.0 2023-03-09 18:29:40,406 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87645.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 18:29:46,561 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87650.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:29:55,212 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87658.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 18:30:04,938 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87666.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 18:30:05,659 INFO [train.py:898] (3/4) Epoch 25, batch 450, loss[loss=0.1632, simple_loss=0.2472, pruned_loss=0.03962, over 18482.00 frames. ], tot_loss[loss=0.1592, simple_loss=0.2497, pruned_loss=0.03432, over 3207126.67 frames. ], batch size: 51, lr: 4.53e-03, grad_scale: 8.0 2023-03-09 18:30:42,037 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.987e+02 2.570e+02 3.208e+02 4.155e+02 1.022e+03, threshold=6.417e+02, percent-clipped=4.0 2023-03-09 18:30:51,846 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=87706.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 18:31:03,994 INFO [train.py:898] (3/4) Epoch 25, batch 500, loss[loss=0.1516, simple_loss=0.2343, pruned_loss=0.03445, over 18506.00 frames. ], tot_loss[loss=0.1591, simple_loss=0.2496, pruned_loss=0.03431, over 3286516.70 frames. ], batch size: 44, lr: 4.53e-03, grad_scale: 8.0 2023-03-09 18:31:06,065 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87718.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:31:33,705 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-09 18:32:03,144 INFO [train.py:898] (3/4) Epoch 25, batch 550, loss[loss=0.1385, simple_loss=0.2227, pruned_loss=0.02714, over 18498.00 frames. ], tot_loss[loss=0.1589, simple_loss=0.2493, pruned_loss=0.0343, over 3353277.71 frames. ], batch size: 44, lr: 4.53e-03, grad_scale: 8.0 2023-03-09 18:32:39,324 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.934e+02 2.781e+02 3.111e+02 3.709e+02 9.396e+02, threshold=6.222e+02, percent-clipped=2.0 2023-03-09 18:32:39,741 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7434, 3.5336, 2.3621, 4.4474, 3.1747, 4.1908, 2.5805, 3.9536], device='cuda:3'), covar=tensor([0.0605, 0.0788, 0.1392, 0.0475, 0.0786, 0.0301, 0.1180, 0.0436], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0230, 0.0195, 0.0294, 0.0197, 0.0273, 0.0206, 0.0208], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:32:42,279 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7324, 3.5041, 2.3164, 4.3663, 3.1578, 4.0930, 2.4918, 3.8623], device='cuda:3'), covar=tensor([0.0558, 0.0735, 0.1429, 0.0465, 0.0766, 0.0375, 0.1190, 0.0454], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0230, 0.0196, 0.0294, 0.0197, 0.0273, 0.0206, 0.0208], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:33:02,207 INFO [train.py:898] (3/4) Epoch 25, batch 600, loss[loss=0.1627, simple_loss=0.2502, pruned_loss=0.03758, over 16898.00 frames. ], tot_loss[loss=0.1587, simple_loss=0.2491, pruned_loss=0.03414, over 3399569.20 frames. 
], batch size: 78, lr: 4.53e-03, grad_scale: 8.0 2023-03-09 18:33:02,402 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87817.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:33:02,568 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9310, 5.4772, 2.8679, 5.2553, 5.1151, 5.4566, 5.2763, 2.7394], device='cuda:3'), covar=tensor([0.0227, 0.0050, 0.0725, 0.0083, 0.0067, 0.0063, 0.0075, 0.0980], device='cuda:3'), in_proj_covar=tensor([0.0090, 0.0082, 0.0097, 0.0097, 0.0088, 0.0077, 0.0085, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3') 2023-03-09 18:33:14,471 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6292, 2.8271, 4.3926, 3.6631, 2.6459, 4.6034, 3.8894, 2.8374], device='cuda:3'), covar=tensor([0.0564, 0.1609, 0.0263, 0.0510, 0.1630, 0.0244, 0.0564, 0.1001], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0244, 0.0225, 0.0170, 0.0228, 0.0219, 0.0255, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 18:33:55,083 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87861.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 18:34:01,457 INFO [train.py:898] (3/4) Epoch 25, batch 650, loss[loss=0.1336, simple_loss=0.2178, pruned_loss=0.02471, over 18431.00 frames. ], tot_loss[loss=0.1577, simple_loss=0.2479, pruned_loss=0.03377, over 3450877.13 frames. ], batch size: 43, lr: 4.53e-03, grad_scale: 8.0 2023-03-09 18:34:24,008 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87886.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:34:38,402 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.755e+02 2.498e+02 2.822e+02 3.446e+02 7.564e+02, threshold=5.643e+02, percent-clipped=1.0 2023-03-09 18:35:00,457 INFO [train.py:898] (3/4) Epoch 25, batch 700, loss[loss=0.1662, simple_loss=0.2555, pruned_loss=0.0385, over 18144.00 frames. ], tot_loss[loss=0.1591, simple_loss=0.2495, pruned_loss=0.03434, over 3473083.16 frames. ], batch size: 62, lr: 4.53e-03, grad_scale: 8.0 2023-03-09 18:35:20,547 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=87934.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:35:28,943 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87940.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 18:35:34,730 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87945.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:35:52,402 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87961.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 18:35:59,572 INFO [train.py:898] (3/4) Epoch 25, batch 750, loss[loss=0.1466, simple_loss=0.2331, pruned_loss=0.03009, over 18590.00 frames. ], tot_loss[loss=0.1586, simple_loss=0.249, pruned_loss=0.03409, over 3498351.03 frames. 
], batch size: 45, lr: 4.52e-03, grad_scale: 8.0 2023-03-09 18:36:35,767 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.893e+02 2.719e+02 3.180e+02 3.674e+02 1.066e+03, threshold=6.360e+02, percent-clipped=6.0 2023-03-09 18:36:48,004 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7477, 2.4493, 2.7591, 2.7746, 3.2916, 4.9097, 4.9357, 3.4597], device='cuda:3'), covar=tensor([0.1943, 0.2533, 0.3010, 0.1912, 0.2385, 0.0248, 0.0353, 0.0972], device='cuda:3'), in_proj_covar=tensor([0.0315, 0.0356, 0.0395, 0.0285, 0.0393, 0.0255, 0.0299, 0.0266], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 18:37:01,714 INFO [train.py:898] (3/4) Epoch 25, batch 800, loss[loss=0.1818, simple_loss=0.2789, pruned_loss=0.04238, over 18091.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2485, pruned_loss=0.03392, over 3527602.90 frames. ], batch size: 62, lr: 4.52e-03, grad_scale: 8.0 2023-03-09 18:37:03,774 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88018.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:37:58,660 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=88066.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:37:59,608 INFO [train.py:898] (3/4) Epoch 25, batch 850, loss[loss=0.1704, simple_loss=0.2728, pruned_loss=0.03406, over 17884.00 frames. ], tot_loss[loss=0.1589, simple_loss=0.2492, pruned_loss=0.03429, over 3548315.18 frames. ], batch size: 70, lr: 4.52e-03, grad_scale: 8.0 2023-03-09 18:38:35,403 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.735e+02 2.499e+02 3.015e+02 3.591e+02 1.108e+03, threshold=6.031e+02, percent-clipped=1.0 2023-03-09 18:38:39,110 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=88100.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:38:58,092 INFO [train.py:898] (3/4) Epoch 25, batch 900, loss[loss=0.211, simple_loss=0.2895, pruned_loss=0.06628, over 12681.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2502, pruned_loss=0.03429, over 3556926.33 frames. ], batch size: 130, lr: 4.52e-03, grad_scale: 8.0 2023-03-09 18:38:58,468 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88117.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:39:20,817 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-09 18:39:42,913 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. 
limit=2.0 2023-03-09 18:39:50,548 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88161.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 18:39:50,593 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=88161.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:39:52,817 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3647, 5.9215, 5.4558, 5.7530, 5.5617, 5.3943, 6.0161, 5.9643], device='cuda:3'), covar=tensor([0.1243, 0.0807, 0.0532, 0.0721, 0.1397, 0.0719, 0.0570, 0.0705], device='cuda:3'), in_proj_covar=tensor([0.0627, 0.0546, 0.0398, 0.0572, 0.0770, 0.0569, 0.0784, 0.0600], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 18:39:55,052 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=88165.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:39:57,058 INFO [train.py:898] (3/4) Epoch 25, batch 950, loss[loss=0.1608, simple_loss=0.2534, pruned_loss=0.03404, over 18306.00 frames. ], tot_loss[loss=0.1593, simple_loss=0.2499, pruned_loss=0.03431, over 3563359.29 frames. ], batch size: 54, lr: 4.52e-03, grad_scale: 8.0 2023-03-09 18:40:03,350 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-09 18:40:14,317 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. limit=5.0 2023-03-09 18:40:33,070 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.747e+02 2.618e+02 3.009e+02 3.619e+02 7.498e+02, threshold=6.018e+02, percent-clipped=2.0 2023-03-09 18:40:46,691 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. limit=5.0 2023-03-09 18:40:47,076 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=88209.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:40:55,866 INFO [train.py:898] (3/4) Epoch 25, batch 1000, loss[loss=0.1869, simple_loss=0.2781, pruned_loss=0.04779, over 12599.00 frames. ], tot_loss[loss=0.1605, simple_loss=0.251, pruned_loss=0.03496, over 3554172.78 frames. 
], batch size: 130, lr: 4.52e-03, grad_scale: 8.0 2023-03-09 18:41:22,560 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88240.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:41:28,342 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88245.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:41:43,531 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8764, 3.7055, 5.0152, 4.4430, 3.3957, 3.0493, 4.4473, 5.2943], device='cuda:3'), covar=tensor([0.0784, 0.1628, 0.0241, 0.0412, 0.0956, 0.1265, 0.0422, 0.0257], device='cuda:3'), in_proj_covar=tensor([0.0153, 0.0284, 0.0166, 0.0187, 0.0196, 0.0196, 0.0199, 0.0209], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 18:41:48,017 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88261.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 18:41:48,161 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5027, 2.3385, 4.3256, 3.8186, 2.1270, 4.4739, 3.7978, 2.6327], device='cuda:3'), covar=tensor([0.0511, 0.2074, 0.0287, 0.0390, 0.2295, 0.0291, 0.0593, 0.1369], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0245, 0.0227, 0.0170, 0.0228, 0.0219, 0.0257, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 18:41:54,317 INFO [train.py:898] (3/4) Epoch 25, batch 1050, loss[loss=0.1595, simple_loss=0.2538, pruned_loss=0.03257, over 17199.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.251, pruned_loss=0.0352, over 3552502.81 frames. ], batch size: 78, lr: 4.52e-03, grad_scale: 8.0 2023-03-09 18:42:18,484 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=88288.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:42:22,029 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=88291.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 18:42:24,174 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=88293.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:42:28,827 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6224, 2.9270, 4.4096, 3.7448, 2.5052, 4.6228, 3.8714, 2.8226], device='cuda:3'), covar=tensor([0.0518, 0.1462, 0.0277, 0.0452, 0.1766, 0.0220, 0.0608, 0.1028], device='cuda:3'), in_proj_covar=tensor([0.0219, 0.0246, 0.0227, 0.0171, 0.0229, 0.0220, 0.0258, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 18:42:29,463 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.898e+02 2.574e+02 3.147e+02 3.590e+02 6.876e+02, threshold=6.293e+02, percent-clipped=1.0 2023-03-09 18:42:42,896 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=88309.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 18:42:52,751 INFO [train.py:898] (3/4) Epoch 25, batch 1100, loss[loss=0.1408, simple_loss=0.2241, pruned_loss=0.02878, over 18522.00 frames. ], tot_loss[loss=0.1597, simple_loss=0.2498, pruned_loss=0.03483, over 3550672.16 frames. 
2023-03-09 18:42:54,189 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3809, 5.3517, 4.9984, 5.2721, 5.2943, 4.7137, 5.2180, 4.9369], device='cuda:3'), covar=tensor([0.0442, 0.0440, 0.1257, 0.0866, 0.0584, 0.0447, 0.0417, 0.1133], device='cuda:3'), in_proj_covar=tensor([0.0507, 0.0574, 0.0720, 0.0447, 0.0470, 0.0524, 0.0560, 0.0694], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 18:43:32,782 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=88352.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 18:43:36,687 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8943, 5.2731, 5.2381, 5.3610, 4.7131, 5.2496, 4.2208, 5.2454], device='cuda:3'), covar=tensor([0.0272, 0.0399, 0.0283, 0.0444, 0.0494, 0.0265, 0.1804, 0.0349], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0271, 0.0267, 0.0346, 0.0282, 0.0278, 0.0313, 0.0270], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 18:43:37,784 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9291, 3.8537, 3.9605, 3.7327, 3.7836, 3.7599, 3.9640, 4.0014], device='cuda:3'), covar=tensor([0.0115, 0.0120, 0.0107, 0.0143, 0.0107, 0.0191, 0.0132, 0.0135], device='cuda:3'), in_proj_covar=tensor([0.0098, 0.0072, 0.0078, 0.0097, 0.0078, 0.0108, 0.0090, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 18:43:50,013 INFO [train.py:898] (3/4) Epoch 25, batch 1150, loss[loss=0.1526, simple_loss=0.2487, pruned_loss=0.02823, over 18535.00 frames. ], tot_loss[loss=0.1595, simple_loss=0.2495, pruned_loss=0.03476, over 3566892.19 frames. ], batch size: 49, lr: 4.51e-03, grad_scale: 8.0
2023-03-09 18:43:57,588 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7305, 3.6885, 3.6052, 3.1276, 3.4179, 2.5780, 2.6007, 3.7640], device='cuda:3'), covar=tensor([0.0067, 0.0109, 0.0081, 0.0168, 0.0114, 0.0265, 0.0295, 0.0069], device='cuda:3'), in_proj_covar=tensor([0.0151, 0.0170, 0.0143, 0.0195, 0.0151, 0.0185, 0.0190, 0.0129], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-09 18:43:59,061 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0
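The scaling.py:679 "Whitening" lines compare a whiteness metric of grouped channel activations against a limit (2.0 for the 8-group/96-channel modules, 5.0 for the 1-group/384-channel ones). A hedged sketch of one way such a metric can be computed, as the ratio of the second moment of the within-group covariance eigenvalues to the square of their mean; this matches the spirit of the logged values but is not necessarily the exact formula in icefall's scaling.py:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """x: (num_frames, num_channels). Returns >= 1.0, equal to 1.0 when the
    within-group covariance is proportional to the identity (fully white).
    Illustrative only -- inferred from the log, not copied from scaling.py."""
    num_frames, num_channels = x.shape
    group_size = num_channels // num_groups
    x = x.reshape(num_frames, num_groups, group_size).transpose(0, 1)
    x = x - x.mean(dim=1, keepdim=True)
    cov = torch.matmul(x.transpose(1, 2), x) / num_frames   # (groups, c, c)
    eigs = torch.linalg.eigvalsh(cov)                       # real eigenvalues
    # mean over groups of E[lambda^2] / (E[lambda])^2
    metric = (eigs.pow(2).mean(dim=1) / eigs.mean(dim=1).pow(2)).mean()
    return metric.item()

# A line like "num_groups=8, num_channels=96, metric=3.17 vs. limit=2.0" then
# means the metric exceeded the limit, so a whitening penalty would apply.
```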
2023-03-09 18:44:25,857 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.694e+02 2.731e+02 3.095e+02 3.646e+02 6.544e+02, threshold=6.189e+02, percent-clipped=1.0
2023-03-09 18:44:26,174 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5221, 6.0681, 5.5561, 5.8265, 5.6162, 5.4737, 6.1049, 6.0273], device='cuda:3'), covar=tensor([0.1204, 0.0655, 0.0434, 0.0702, 0.1327, 0.0731, 0.0580, 0.0744], device='cuda:3'), in_proj_covar=tensor([0.0627, 0.0544, 0.0398, 0.0575, 0.0771, 0.0569, 0.0784, 0.0598], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 18:44:45,267 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7061, 2.4343, 2.6138, 2.7045, 3.2360, 4.8333, 4.8080, 3.1822], device='cuda:3'), covar=tensor([0.1925, 0.2480, 0.3133, 0.1942, 0.2499, 0.0254, 0.0357, 0.1129], device='cuda:3'), in_proj_covar=tensor([0.0317, 0.0357, 0.0397, 0.0286, 0.0394, 0.0255, 0.0300, 0.0266], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 18:44:47,500 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6740, 2.5448, 2.5035, 2.6620, 3.0048, 3.7163, 3.7320, 3.0131], device='cuda:3'), covar=tensor([0.1870, 0.2338, 0.2916, 0.1855, 0.2246, 0.0539, 0.0578, 0.0990], device='cuda:3'), in_proj_covar=tensor([0.0317, 0.0357, 0.0397, 0.0286, 0.0394, 0.0255, 0.0300, 0.0266], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 18:44:48,677 INFO [train.py:898] (3/4) Epoch 25, batch 1200, loss[loss=0.191, simple_loss=0.2726, pruned_loss=0.05468, over 13248.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2495, pruned_loss=0.03468, over 3554015.64 frames. ], batch size: 129, lr: 4.51e-03, grad_scale: 8.0
2023-03-09 18:45:21,062 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0
2023-03-09 18:45:25,413 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-09 18:45:34,632 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=88456.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 18:45:46,403 INFO [train.py:898] (3/4) Epoch 25, batch 1250, loss[loss=0.1612, simple_loss=0.2516, pruned_loss=0.03545, over 18267.00 frames. ], tot_loss[loss=0.1596, simple_loss=0.2496, pruned_loss=0.03481, over 3572504.29 frames. ], batch size: 49, lr: 4.51e-03, grad_scale: 16.0
2023-03-09 18:46:22,889 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.846e+02 2.624e+02 3.040e+02 3.727e+02 1.203e+03, threshold=6.079e+02, percent-clipped=2.0
2023-03-09 18:46:42,858 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6582, 2.3792, 2.5686, 2.7068, 3.2740, 4.8027, 4.6917, 3.2941], device='cuda:3'), covar=tensor([0.1984, 0.2516, 0.3097, 0.1893, 0.2355, 0.0236, 0.0383, 0.1019], device='cuda:3'), in_proj_covar=tensor([0.0317, 0.0357, 0.0397, 0.0286, 0.0394, 0.0255, 0.0300, 0.0266], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 18:46:44,554 INFO [train.py:898] (3/4) Epoch 25, batch 1300, loss[loss=0.1838, simple_loss=0.2754, pruned_loss=0.04609, over 18483.00 frames. ], tot_loss[loss=0.1597, simple_loss=0.25, pruned_loss=0.03467, over 3578164.35 frames. ], batch size: 59, lr: 4.51e-03, grad_scale: 16.0
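The zipformer.py:1455 dumps above print, per attention head, the entropy of the attention weight distribution: a diagnostic for attention collapse (values near log of the number of keys mean diffuse attention; values near zero mean a head is locking onto single positions). A minimal sketch of the quantity being logged; the shape conventions and the reduction over queries are assumptions:

```python
import torch

def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    """attn_weights: (num_heads, num_queries, num_keys), rows summing to 1.
    Returns per-head mean entropy in nats, comparable to the
    'attn_weights_entropy = tensor([...])' diagnostics in this log
    (the exact reduction used in zipformer.py is assumed here)."""
    eps = 1.0e-20
    entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return entropy.mean(dim=-1)   # average over queries -> (num_heads,)

# Uniform attention over a few hundred keys gives log(300) ~= 5.7 nats per
# head, the order of magnitude of the larger values printed above.
```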
2023-03-09 18:47:42,861 INFO [train.py:898] (3/4) Epoch 25, batch 1350, loss[loss=0.1544, simple_loss=0.2469, pruned_loss=0.03099, over 17769.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2497, pruned_loss=0.03454, over 3578501.73 frames. ], batch size: 70, lr: 4.51e-03, grad_scale: 16.0
2023-03-09 18:48:19,809 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.775e+02 2.523e+02 2.984e+02 3.616e+02 6.843e+02, threshold=5.967e+02, percent-clipped=1.0
2023-03-09 18:48:41,212 INFO [train.py:898] (3/4) Epoch 25, batch 1400, loss[loss=0.1825, simple_loss=0.2717, pruned_loss=0.04668, over 17737.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2496, pruned_loss=0.03461, over 3579611.11 frames. ], batch size: 70, lr: 4.51e-03, grad_scale: 16.0
2023-03-09 18:48:44,793 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-09 18:49:17,090 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=88647.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 18:49:36,459 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6537, 2.3858, 2.6013, 2.8202, 3.2734, 4.9036, 4.7450, 3.3659], device='cuda:3'), covar=tensor([0.1955, 0.2521, 0.3193, 0.1822, 0.2363, 0.0254, 0.0355, 0.1008], device='cuda:3'), in_proj_covar=tensor([0.0316, 0.0356, 0.0396, 0.0286, 0.0393, 0.0255, 0.0300, 0.0266], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 18:49:39,309 INFO [train.py:898] (3/4) Epoch 25, batch 1450, loss[loss=0.1428, simple_loss=0.2271, pruned_loss=0.02931, over 18188.00 frames. ], tot_loss[loss=0.1588, simple_loss=0.249, pruned_loss=0.03428, over 3582026.98 frames. ], batch size: 44, lr: 4.51e-03, grad_scale: 8.0
2023-03-09 18:49:50,605 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6293, 3.6125, 3.4429, 3.0910, 3.3223, 2.6710, 2.6373, 3.5980], device='cuda:3'), covar=tensor([0.0067, 0.0104, 0.0091, 0.0151, 0.0113, 0.0216, 0.0228, 0.0078], device='cuda:3'), in_proj_covar=tensor([0.0148, 0.0168, 0.0141, 0.0193, 0.0150, 0.0183, 0.0188, 0.0128], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 18:50:17,346 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.835e+02 2.449e+02 2.886e+02 3.614e+02 7.165e+02, threshold=5.771e+02, percent-clipped=1.0
2023-03-09 18:50:37,471 INFO [train.py:898] (3/4) Epoch 25, batch 1500, loss[loss=0.1632, simple_loss=0.2545, pruned_loss=0.03599, over 18370.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2489, pruned_loss=0.03409, over 3592467.93 frames. ], batch size: 50, lr: 4.51e-03, grad_scale: 8.0
2023-03-09 18:50:54,443 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
2023-03-09 18:50:59,847 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0924, 5.0832, 4.7134, 5.0152, 5.0375, 4.4339, 4.9400, 4.6624], device='cuda:3'), covar=tensor([0.0480, 0.0468, 0.1276, 0.0805, 0.0564, 0.0482, 0.0477, 0.1151], device='cuda:3'), in_proj_covar=tensor([0.0512, 0.0579, 0.0722, 0.0450, 0.0475, 0.0527, 0.0566, 0.0701], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 18:51:23,452 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88756.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 18:51:35,259 INFO [train.py:898] (3/4) Epoch 25, batch 1550, loss[loss=0.1706, simple_loss=0.2733, pruned_loss=0.03393, over 17088.00 frames. ], tot_loss[loss=0.159, simple_loss=0.2495, pruned_loss=0.03428, over 3583337.90 frames. ], batch size: 78, lr: 4.50e-03, grad_scale: 8.0
2023-03-09 18:52:13,194 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.757e+02 2.636e+02 3.082e+02 3.617e+02 8.587e+02, threshold=6.165e+02, percent-clipped=5.0
2023-03-09 18:52:20,078 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=88804.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 18:52:34,355 INFO [train.py:898] (3/4) Epoch 25, batch 1600, loss[loss=0.1619, simple_loss=0.2579, pruned_loss=0.03298, over 18351.00 frames. ], tot_loss[loss=0.159, simple_loss=0.2494, pruned_loss=0.03425, over 3572788.17 frames. ], batch size: 55, lr: 4.50e-03, grad_scale: 8.0
2023-03-09 18:53:32,608 INFO [train.py:898] (3/4) Epoch 25, batch 1650, loss[loss=0.1578, simple_loss=0.2494, pruned_loss=0.03312, over 18476.00 frames. ], tot_loss[loss=0.159, simple_loss=0.2496, pruned_loss=0.03426, over 3578719.93 frames. ], batch size: 59, lr: 4.50e-03, grad_scale: 8.0
2023-03-09 18:54:09,628 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.772e+02 2.755e+02 3.165e+02 3.755e+02 1.372e+03, threshold=6.330e+02, percent-clipped=6.0
2023-03-09 18:54:11,715 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0
2023-03-09 18:54:30,561 INFO [train.py:898] (3/4) Epoch 25, batch 1700, loss[loss=0.1432, simple_loss=0.2285, pruned_loss=0.02895, over 17662.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.25, pruned_loss=0.03438, over 3572383.36 frames. ], batch size: 39, lr: 4.50e-03, grad_scale: 8.0
2023-03-09 18:55:04,758 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88947.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 18:55:28,681 INFO [train.py:898] (3/4) Epoch 25, batch 1750, loss[loss=0.1503, simple_loss=0.2498, pruned_loss=0.0254, over 18625.00 frames. ], tot_loss[loss=0.1586, simple_loss=0.2492, pruned_loss=0.03398, over 3586788.62 frames. ], batch size: 52, lr: 4.50e-03, grad_scale: 8.0
2023-03-09 18:56:00,682 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=88995.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 18:56:04,901 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.658e+02 2.478e+02 2.880e+02 3.518e+02 7.008e+02, threshold=5.760e+02, percent-clipped=1.0
2023-03-09 18:56:12,427 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-09 18:56:27,123 INFO [train.py:898] (3/4) Epoch 25, batch 1800, loss[loss=0.196, simple_loss=0.2825, pruned_loss=0.05479, over 16172.00 frames. ], tot_loss[loss=0.1584, simple_loss=0.2492, pruned_loss=0.03381, over 3591363.42 frames. ], batch size: 94, lr: 4.50e-03, grad_scale: 4.0
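The zipformer.py:625 lines show each encoder stack deciding, per batch, how many of its layers to skip ("layer dropout"); notably, layers are still occasionally dropped here (num_to_drop=1) even though batch_count (~88k) is far past every stack's warmup_end, which suggests a small residual drop probability after warmup. A sketch of such a schedule; the probability values and the anneal shape are assumptions, only the logged fields come from the log:

```python
import random

def choose_layers_to_drop(num_layers: int, batch_count: float,
                          warmup_begin: float, warmup_end: float) -> set:
    """Illustrative layer-dropout schedule: drop aggressively inside a stack's
    warmup window, then only rarely afterwards. The probabilities here are
    assumed; zipformer.py:625 logs only warmup_begin, warmup_end,
    batch_count, num_to_drop and layers_to_drop."""
    if batch_count < warmup_begin:
        drop_prob = 0.5
    elif batch_count < warmup_end:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        drop_prob = 0.5 * (1.0 - frac) + 0.05 * frac   # anneal 0.5 -> 0.05
    else:
        drop_prob = 0.05   # residual dropout long after warmup, as seen above
    return {i for i in range(num_layers) if random.random() < drop_prob}

# e.g. choose_layers_to_drop(4, 88161.0, 2000.0, 2666.7) might return {1},
# matching a "num_to_drop=1, layers_to_drop={1}" line.
```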
2023-03-09 18:56:38,793 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89027.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 18:57:13,885 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8994, 4.1084, 2.4926, 4.1247, 5.2063, 2.7667, 3.8518, 4.0496], device='cuda:3'), covar=tensor([0.0215, 0.1209, 0.1684, 0.0667, 0.0106, 0.1163, 0.0698, 0.0668], device='cuda:3'), in_proj_covar=tensor([0.0178, 0.0278, 0.0207, 0.0202, 0.0136, 0.0187, 0.0221, 0.0230], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 18:57:25,678 INFO [train.py:898] (3/4) Epoch 25, batch 1850, loss[loss=0.1589, simple_loss=0.2504, pruned_loss=0.03369, over 17998.00 frames. ], tot_loss[loss=0.1586, simple_loss=0.2493, pruned_loss=0.034, over 3579303.43 frames. ], batch size: 65, lr: 4.50e-03, grad_scale: 4.0
2023-03-09 18:57:49,980 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89088.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 18:58:03,153 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.649e+02 2.662e+02 3.134e+02 3.843e+02 1.322e+03, threshold=6.269e+02, percent-clipped=2.0
2023-03-09 18:58:04,673 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8232, 3.1922, 4.5895, 3.8480, 3.0338, 4.8180, 4.0748, 3.2359], device='cuda:3'), covar=tensor([0.0434, 0.1289, 0.0270, 0.0427, 0.1370, 0.0211, 0.0521, 0.0833], device='cuda:3'), in_proj_covar=tensor([0.0218, 0.0243, 0.0228, 0.0169, 0.0228, 0.0220, 0.0256, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 18:58:23,391 INFO [train.py:898] (3/4) Epoch 25, batch 1900, loss[loss=0.1445, simple_loss=0.2318, pruned_loss=0.02854, over 18478.00 frames. ], tot_loss[loss=0.1588, simple_loss=0.2495, pruned_loss=0.03408, over 3584715.48 frames. ], batch size: 44, lr: 4.50e-03, grad_scale: 4.0
2023-03-09 18:58:58,065 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.46 vs. limit=2.0
2023-03-09 18:59:08,266 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2615, 5.2945, 5.3453, 5.0955, 5.0679, 5.1910, 5.4200, 5.4598], device='cuda:3'), covar=tensor([0.0063, 0.0048, 0.0051, 0.0090, 0.0052, 0.0128, 0.0058, 0.0064], device='cuda:3'), in_proj_covar=tensor([0.0097, 0.0072, 0.0078, 0.0096, 0.0077, 0.0106, 0.0089, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 18:59:18,567 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8814, 3.4378, 2.6200, 3.2984, 4.0599, 2.5518, 3.3466, 3.3833], device='cuda:3'), covar=tensor([0.0282, 0.0859, 0.1297, 0.0675, 0.0165, 0.1114, 0.0678, 0.0706], device='cuda:3'), in_proj_covar=tensor([0.0176, 0.0276, 0.0205, 0.0200, 0.0135, 0.0186, 0.0220, 0.0228], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 18:59:22,369 INFO [train.py:898] (3/4) Epoch 25, batch 1950, loss[loss=0.1605, simple_loss=0.2549, pruned_loss=0.03299, over 18241.00 frames. ], tot_loss[loss=0.1587, simple_loss=0.249, pruned_loss=0.03423, over 3579256.85 frames. ], batch size: 47, lr: 4.49e-03, grad_scale: 4.0
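Throughout this log the reported loss is a fixed combination of the simple and pruned RNN-T losses. With the simple_loss_scale of 0.5 from the run configuration, loss = 0.5 * simple_loss + pruned_loss reproduces the printed numbers, e.g. 0.5 * 0.2534 + 0.03404 = 0.1607 ~= 0.1608 at epoch 25, batch 950. (Early in training the pruned term may be weighted differently during warm-up; by epoch 25 that weight has evidently reached 1.0.)

```python
def combined_loss(simple_loss: float, pruned_loss: float,
                  simple_loss_scale: float = 0.5) -> float:
    """loss = simple_loss_scale * simple_loss + pruned_loss, the identity that
    every loss[...] / tot_loss[...] entry in this section satisfies."""
    return simple_loss_scale * simple_loss + pruned_loss

# Spot checks against the log:
assert abs(combined_loss(0.2534, 0.03404) - 0.1608) < 1e-3   # epoch 25, batch 950
assert abs(combined_loss(0.2781, 0.04779) - 0.1869) < 1e-3   # epoch 25, batch 1000
```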
2023-03-09 18:59:25,454 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89169.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:00:00,687 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.906e+02 2.686e+02 3.109e+02 3.749e+02 7.120e+02, threshold=6.217e+02, percent-clipped=3.0
2023-03-09 19:00:09,077 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3017, 5.2640, 4.9233, 5.1946, 5.1795, 4.6440, 5.1270, 4.8768], device='cuda:3'), covar=tensor([0.0452, 0.0476, 0.1359, 0.0781, 0.0635, 0.0484, 0.0409, 0.1091], device='cuda:3'), in_proj_covar=tensor([0.0516, 0.0581, 0.0723, 0.0452, 0.0474, 0.0527, 0.0566, 0.0703], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 19:00:20,481 INFO [train.py:898] (3/4) Epoch 25, batch 2000, loss[loss=0.1665, simple_loss=0.2618, pruned_loss=0.0356, over 18341.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2487, pruned_loss=0.03411, over 3589742.56 frames. ], batch size: 55, lr: 4.49e-03, grad_scale: 8.0
2023-03-09 19:00:36,504 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89230.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:01:18,544 INFO [train.py:898] (3/4) Epoch 25, batch 2050, loss[loss=0.1625, simple_loss=0.2467, pruned_loss=0.03911, over 12636.00 frames. ], tot_loss[loss=0.1593, simple_loss=0.2496, pruned_loss=0.03452, over 3587628.82 frames. ], batch size: 130, lr: 4.49e-03, grad_scale: 8.0
2023-03-09 19:01:43,002 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6189, 5.5504, 5.2226, 5.5897, 5.5093, 4.9297, 5.4436, 5.1441], device='cuda:3'), covar=tensor([0.0410, 0.0415, 0.1250, 0.0670, 0.0576, 0.0410, 0.0384, 0.1034], device='cuda:3'), in_proj_covar=tensor([0.0514, 0.0578, 0.0718, 0.0450, 0.0472, 0.0524, 0.0563, 0.0698], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 19:01:57,828 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.761e+02 2.663e+02 3.151e+02 3.828e+02 7.716e+02, threshold=6.301e+02, percent-clipped=2.0
2023-03-09 19:02:17,078 INFO [train.py:898] (3/4) Epoch 25, batch 2100, loss[loss=0.158, simple_loss=0.2562, pruned_loss=0.0299, over 16243.00 frames. ], tot_loss[loss=0.1586, simple_loss=0.249, pruned_loss=0.03415, over 3596056.79 frames. ], batch size: 94, lr: 4.49e-03, grad_scale: 8.0
2023-03-09 19:02:28,141 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6911, 2.3570, 2.6666, 2.7200, 3.1401, 4.8118, 4.6932, 3.2844], device='cuda:3'), covar=tensor([0.1970, 0.2527, 0.2828, 0.1896, 0.2487, 0.0239, 0.0357, 0.1034], device='cuda:3'), in_proj_covar=tensor([0.0319, 0.0357, 0.0399, 0.0287, 0.0394, 0.0257, 0.0302, 0.0266], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 19:02:50,060 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.61 vs. limit=5.0
2023-03-09 19:03:15,258 INFO [train.py:898] (3/4) Epoch 25, batch 2150, loss[loss=0.1544, simple_loss=0.2434, pruned_loss=0.0327, over 18369.00 frames. ], tot_loss[loss=0.1588, simple_loss=0.249, pruned_loss=0.03429, over 3591034.11 frames. ], batch size: 46, lr: 4.49e-03, grad_scale: 8.0
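The lr printed with each batch decays with both the global batch count and the epoch. With base_lr=0.05, lr_batches=5000 and lr_epochs=3.5 from the run configuration, an Eden-style schedule (written from memory here, so treat the exact form as an assumption) reproduces the values in this section:

```python
def eden_lr(batch: int, epoch: float, base_lr: float = 0.05,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    """Eden-style schedule as used by icefall's pruned_transducer recipes:
    roughly flat until lr_batches / lr_epochs, then ~batch^-0.5 and
    ~epoch^-0.5 decay (assumed form, verified only against this log)."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# With batch ~= 88200 and a 0-based epoch index of 24 (the log's epoch 25)
# this gives ~4.52e-03, matching the 'lr: 4.52e-03' entries above.
print(f"{eden_lr(88200, 24):.3e}")
```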
2023-03-09 19:03:24,306 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89374.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:03:35,665 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89383.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:03:38,330 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6498, 4.0463, 2.3440, 3.9014, 4.9830, 2.5304, 3.7438, 3.8784], device='cuda:3'), covar=tensor([0.0220, 0.1109, 0.1660, 0.0647, 0.0102, 0.1209, 0.0701, 0.0706], device='cuda:3'), in_proj_covar=tensor([0.0177, 0.0277, 0.0207, 0.0201, 0.0136, 0.0187, 0.0221, 0.0230], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:03:54,955 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.788e+02 2.675e+02 3.172e+02 3.693e+02 8.174e+02, threshold=6.344e+02, percent-clipped=3.0
2023-03-09 19:04:15,089 INFO [train.py:898] (3/4) Epoch 25, batch 2200, loss[loss=0.1453, simple_loss=0.2314, pruned_loss=0.0296, over 18354.00 frames. ], tot_loss[loss=0.1588, simple_loss=0.2489, pruned_loss=0.03433, over 3581268.24 frames. ], batch size: 46, lr: 4.49e-03, grad_scale: 8.0
2023-03-09 19:04:25,087 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89425.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:04:37,522 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89435.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:04:41,912 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89439.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 19:04:46,719 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0
2023-03-09 19:05:13,787 INFO [train.py:898] (3/4) Epoch 25, batch 2250, loss[loss=0.1747, simple_loss=0.2745, pruned_loss=0.03744, over 17794.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2498, pruned_loss=0.03449, over 3579169.92 frames. ], batch size: 70, lr: 4.49e-03, grad_scale: 8.0
2023-03-09 19:05:38,359 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89486.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:05:53,611 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.616e+02 2.577e+02 2.992e+02 3.589e+02 8.023e+02, threshold=5.985e+02, percent-clipped=1.0
2023-03-09 19:05:53,943 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89500.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 19:06:04,242 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89509.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:06:12,816 INFO [train.py:898] (3/4) Epoch 25, batch 2300, loss[loss=0.1716, simple_loss=0.2601, pruned_loss=0.04161, over 18484.00 frames. ], tot_loss[loss=0.16, simple_loss=0.2505, pruned_loss=0.03477, over 3568967.03 frames. ], batch size: 51, lr: 4.49e-03, grad_scale: 8.0
2023-03-09 19:06:20,041 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9320, 5.4424, 2.9590, 5.2203, 5.1326, 5.4415, 5.2589, 2.7443], device='cuda:3'), covar=tensor([0.0226, 0.0078, 0.0719, 0.0075, 0.0078, 0.0067, 0.0088, 0.0955], device='cuda:3'), in_proj_covar=tensor([0.0092, 0.0083, 0.0098, 0.0098, 0.0089, 0.0079, 0.0086, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 19:06:22,691 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89525.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:06:47,688 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5755, 6.0654, 5.6131, 5.9769, 5.6727, 5.5664, 6.1693, 6.0992], device='cuda:3'), covar=tensor([0.1182, 0.0767, 0.0492, 0.0624, 0.1437, 0.0704, 0.0539, 0.0664], device='cuda:3'), in_proj_covar=tensor([0.0625, 0.0551, 0.0400, 0.0578, 0.0773, 0.0572, 0.0788, 0.0595], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 19:06:53,422 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7260, 3.0280, 2.6576, 2.9856, 3.7359, 3.6380, 3.2442, 2.9479], device='cuda:3'), covar=tensor([0.0181, 0.0292, 0.0521, 0.0379, 0.0183, 0.0149, 0.0362, 0.0418], device='cuda:3'), in_proj_covar=tensor([0.0143, 0.0142, 0.0166, 0.0162, 0.0137, 0.0123, 0.0159, 0.0161], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:06:54,520 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5238, 2.8707, 2.4748, 2.8548, 3.5806, 3.4384, 3.1432, 2.8148], device='cuda:3'), covar=tensor([0.0178, 0.0266, 0.0578, 0.0376, 0.0185, 0.0153, 0.0337, 0.0401], device='cuda:3'), in_proj_covar=tensor([0.0143, 0.0142, 0.0165, 0.0162, 0.0137, 0.0123, 0.0159, 0.0161], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:07:11,313 INFO [train.py:898] (3/4) Epoch 25, batch 2350, loss[loss=0.1307, simple_loss=0.2126, pruned_loss=0.02441, over 18457.00 frames. ], tot_loss[loss=0.1601, simple_loss=0.2507, pruned_loss=0.03479, over 3570804.06 frames. ], batch size: 43, lr: 4.48e-03, grad_scale: 8.0
2023-03-09 19:07:15,052 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89570.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:07:36,850 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7903, 3.8471, 3.6355, 3.2957, 3.5353, 2.9667, 3.0566, 3.8734], device='cuda:3'), covar=tensor([0.0071, 0.0090, 0.0082, 0.0134, 0.0105, 0.0187, 0.0189, 0.0061], device='cuda:3'), in_proj_covar=tensor([0.0150, 0.0168, 0.0140, 0.0193, 0.0149, 0.0183, 0.0186, 0.0127], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 19:07:38,010 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6496, 2.3085, 2.5485, 2.6962, 3.1690, 4.7285, 4.6419, 3.1803], device='cuda:3'), covar=tensor([0.2040, 0.2661, 0.3080, 0.1986, 0.2511, 0.0278, 0.0380, 0.1134], device='cuda:3'), in_proj_covar=tensor([0.0321, 0.0359, 0.0401, 0.0289, 0.0396, 0.0258, 0.0303, 0.0268], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3')
2023-03-09 19:07:43,501 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1180, 5.1387, 5.2443, 4.9642, 4.9538, 4.9685, 5.3193, 5.3453], device='cuda:3'), covar=tensor([0.0061, 0.0065, 0.0052, 0.0103, 0.0057, 0.0152, 0.0070, 0.0079], device='cuda:3'), in_proj_covar=tensor([0.0098, 0.0073, 0.0078, 0.0097, 0.0078, 0.0108, 0.0090, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 19:07:51,625 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.602e+02 2.634e+02 3.010e+02 3.571e+02 1.034e+03, threshold=6.019e+02, percent-clipped=2.0
2023-03-09 19:08:09,878 INFO [train.py:898] (3/4) Epoch 25, batch 2400, loss[loss=0.147, simple_loss=0.235, pruned_loss=0.02954, over 18245.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2498, pruned_loss=0.03451, over 3575015.22 frames. ], batch size: 45, lr: 4.48e-03, grad_scale: 8.0
2023-03-09 19:09:07,613 INFO [train.py:898] (3/4) Epoch 25, batch 2450, loss[loss=0.1682, simple_loss=0.2647, pruned_loss=0.03582, over 18372.00 frames. ], tot_loss[loss=0.1591, simple_loss=0.2494, pruned_loss=0.03436, over 3584768.62 frames. ], batch size: 55, lr: 4.48e-03, grad_scale: 4.0
2023-03-09 19:09:23,500 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89680.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:09:26,964 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=89683.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:09:49,622 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.687e+02 2.547e+02 3.016e+02 3.415e+02 5.579e+02, threshold=6.031e+02, percent-clipped=0.0
2023-03-09 19:10:02,396 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4966, 4.4202, 4.5727, 4.2506, 4.2262, 4.4514, 4.6316, 4.5281], device='cuda:3'), covar=tensor([0.0111, 0.0119, 0.0110, 0.0174, 0.0125, 0.0200, 0.0116, 0.0168], device='cuda:3'), in_proj_covar=tensor([0.0099, 0.0074, 0.0079, 0.0098, 0.0078, 0.0108, 0.0091, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 19:10:06,525 INFO [train.py:898] (3/4) Epoch 25, batch 2500, loss[loss=0.1411, simple_loss=0.2242, pruned_loss=0.02906, over 18451.00 frames. ], tot_loss[loss=0.1595, simple_loss=0.2499, pruned_loss=0.03455, over 3580433.93 frames. ], batch size: 43, lr: 4.48e-03, grad_scale: 4.0
2023-03-09 19:10:21,784 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89730.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:10:22,826 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=89731.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:10:35,032 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89741.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:11:04,384 INFO [train.py:898] (3/4) Epoch 25, batch 2550, loss[loss=0.1663, simple_loss=0.2558, pruned_loss=0.03845, over 18253.00 frames. ], tot_loss[loss=0.1591, simple_loss=0.2496, pruned_loss=0.03424, over 3583840.43 frames. ], batch size: 60, lr: 4.48e-03, grad_scale: 4.0
2023-03-09 19:11:04,682 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4128, 4.8226, 4.4707, 4.7478, 4.5479, 4.4380, 4.9332, 4.8237], device='cuda:3'), covar=tensor([0.1174, 0.0825, 0.1484, 0.0682, 0.1363, 0.0764, 0.0706, 0.0786], device='cuda:3'), in_proj_covar=tensor([0.0620, 0.0549, 0.0397, 0.0575, 0.0766, 0.0569, 0.0782, 0.0592], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 19:11:20,656 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89781.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:11:28,776 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8050, 3.0752, 2.6448, 3.0357, 3.8034, 3.7908, 3.2665, 3.1069], device='cuda:3'), covar=tensor([0.0175, 0.0271, 0.0606, 0.0386, 0.0161, 0.0161, 0.0357, 0.0354], device='cuda:3'), in_proj_covar=tensor([0.0142, 0.0142, 0.0166, 0.0163, 0.0138, 0.0123, 0.0159, 0.0161], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:11:37,072 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89795.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 19:11:41,941 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-09 19:11:45,423 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.598e+02 2.460e+02 2.989e+02 3.597e+02 5.800e+02, threshold=5.978e+02, percent-clipped=0.0
2023-03-09 19:12:03,001 INFO [train.py:898] (3/4) Epoch 25, batch 2600, loss[loss=0.1538, simple_loss=0.2466, pruned_loss=0.0305, over 18266.00 frames. ], tot_loss[loss=0.1589, simple_loss=0.2494, pruned_loss=0.03424, over 3575422.13 frames. ], batch size: 57, lr: 4.48e-03, grad_scale: 4.0
2023-03-09 19:12:13,008 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=89825.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:12:59,272 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89865.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:12:59,484 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89865.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:13:01,342 INFO [train.py:898] (3/4) Epoch 25, batch 2650, loss[loss=0.1438, simple_loss=0.2403, pruned_loss=0.02365, over 18289.00 frames. ], tot_loss[loss=0.1587, simple_loss=0.2492, pruned_loss=0.03414, over 3585608.47 frames. ], batch size: 49, lr: 4.48e-03, grad_scale: 4.0
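The grad_scale values printed with the batches move between 4.0, 8.0 and 16.0 across this section, the signature of dynamic loss scaling for fp16 training ('use_fp16': True in the configuration): the scale doubles after a run of overflow-free steps and halves when an inf/nan gradient is detected. A minimal loop using PyTorch's standard GradScaler; the constructor constants shown are PyTorch defaults or plausible choices, and icefall may configure them differently:

```python
import torch

# growth_interval=2000: double the scale after 2000 consecutive good steps.
scaler = torch.cuda.amp.GradScaler(init_scale=2.0, growth_interval=2000)

def training_step(model, optimizer, batch):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():          # forward in fp16/mixed precision
        loss = model(batch)
    scaler.scale(loss).backward()            # gradients at the current scale
    scaler.step(optimizer)                   # skipped if inf/nan is detected
    scaler.update()                          # x2 after growth_interval good
                                             # steps, /2 on overflow -> the
                                             # 4.0 / 8.0 / 16.0 values above
    return scaler.get_scale()                # the logged grad_scale value
```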
2023-03-09 19:13:08,337 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=89873.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:13:41,748 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.822e+02 2.660e+02 3.253e+02 3.912e+02 1.582e+03, threshold=6.506e+02, percent-clipped=3.0
2023-03-09 19:13:54,770 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-09 19:13:59,689 INFO [train.py:898] (3/4) Epoch 25, batch 2700, loss[loss=0.1354, simple_loss=0.224, pruned_loss=0.02339, over 18423.00 frames. ], tot_loss[loss=0.158, simple_loss=0.2487, pruned_loss=0.03369, over 3593935.03 frames. ], batch size: 43, lr: 4.48e-03, grad_scale: 4.0
2023-03-09 19:14:10,129 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89926.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 19:14:39,862 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-09 19:14:58,063 INFO [train.py:898] (3/4) Epoch 25, batch 2750, loss[loss=0.1509, simple_loss=0.2419, pruned_loss=0.02999, over 18486.00 frames. ], tot_loss[loss=0.1584, simple_loss=0.2486, pruned_loss=0.03411, over 3589004.65 frames. ], batch size: 51, lr: 4.47e-03, grad_scale: 4.0
2023-03-09 19:15:43,123 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.861e+02 2.579e+02 2.972e+02 3.453e+02 7.499e+02, threshold=5.944e+02, percent-clipped=1.0
2023-03-09 19:16:00,867 INFO [train.py:898] (3/4) Epoch 25, batch 2800, loss[loss=0.1377, simple_loss=0.2195, pruned_loss=0.02791, over 18421.00 frames. ], tot_loss[loss=0.158, simple_loss=0.2479, pruned_loss=0.03402, over 3587580.07 frames. ], batch size: 42, lr: 4.47e-03, grad_scale: 8.0
2023-03-09 19:16:15,967 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90030.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:16:22,597 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90036.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:16:39,750 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.46 vs. limit=2.0
2023-03-09 19:16:57,967 INFO [train.py:898] (3/4) Epoch 25, batch 2850, loss[loss=0.1837, simple_loss=0.2715, pruned_loss=0.04796, over 12706.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2484, pruned_loss=0.03404, over 3594407.28 frames. ], batch size: 130, lr: 4.47e-03, grad_scale: 8.0
2023-03-09 19:17:11,247 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=90078.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:17:12,895 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.03 vs. limit=5.0
2023-03-09 19:17:14,896 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90081.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:17:31,062 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90095.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 19:17:38,646 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.912e+02 2.603e+02 3.094e+02 3.729e+02 6.556e+02, threshold=6.188e+02, percent-clipped=2.0
2023-03-09 19:17:56,097 INFO [train.py:898] (3/4) Epoch 25, batch 2900, loss[loss=0.1783, simple_loss=0.2658, pruned_loss=0.04535, over 18532.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2498, pruned_loss=0.03456, over 3575614.33 frames. ], batch size: 49, lr: 4.47e-03, grad_scale: 8.0
2023-03-09 19:18:10,711 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=90129.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:18:12,022 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90130.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:18:26,751 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=90143.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 19:18:52,953 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90165.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:18:54,884 INFO [train.py:898] (3/4) Epoch 25, batch 2950, loss[loss=0.1479, simple_loss=0.2394, pruned_loss=0.0282, over 18279.00 frames. ], tot_loss[loss=0.1589, simple_loss=0.2494, pruned_loss=0.03419, over 3587118.68 frames. ], batch size: 49, lr: 4.47e-03, grad_scale: 8.0
2023-03-09 19:18:59,860 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6051, 2.4202, 2.5355, 2.7977, 3.1384, 4.9256, 4.9334, 3.3556], device='cuda:3'), covar=tensor([0.2072, 0.2703, 0.3363, 0.1926, 0.2680, 0.0265, 0.0352, 0.1047], device='cuda:3'), in_proj_covar=tensor([0.0319, 0.0359, 0.0402, 0.0289, 0.0396, 0.0259, 0.0302, 0.0268], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 19:19:04,462 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.76 vs. limit=5.0
2023-03-09 19:19:23,409 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90191.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:19:36,177 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.680e+02 2.539e+02 2.924e+02 3.398e+02 8.084e+02, threshold=5.847e+02, percent-clipped=3.0
2023-03-09 19:19:36,963 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0
2023-03-09 19:19:48,869 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=90213.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:19:53,268 INFO [train.py:898] (3/4) Epoch 25, batch 3000, loss[loss=0.1443, simple_loss=0.216, pruned_loss=0.03631, over 18437.00 frames. ], tot_loss[loss=0.1584, simple_loss=0.2487, pruned_loss=0.03404, over 3577698.86 frames. ], batch size: 42, lr: 4.47e-03, grad_scale: 8.0
2023-03-09 19:19:53,268 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 19:20:05,342 INFO [train.py:932] (3/4) Epoch 25, validation: loss=0.1501, simple_loss=0.2485, pruned_loss=0.02584, over 944034.00 frames.
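At fixed intervals (every 3000 batches, per the valid_interval in the configuration, and again at the start of each epoch) training pauses and the validation loss is computed over the full dev sets, which is why the frame count is always the same 944034.00. A sketch of such a loop; the helper names and the model.compute_loss interface are illustrative, not train.py's actual API:

```python
import torch

def compute_validation_loss(model, valid_loader):
    """Frame-weighted average loss over the whole dev set, model in eval mode,
    no gradients -- a sketch of the 'Computing validation loss' step."""
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            loss, num_frames = model.compute_loss(batch)   # assumed interface
            tot_loss += loss.item() * num_frames
            tot_frames += num_frames
    model.train()
    # logged as e.g. "validation: loss=0.1501 ... over 944034.00 frames."
    return tot_loss / tot_frames, tot_frames
```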
2023-03-09 19:20:05,343 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 19:20:10,063 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90221.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 19:20:14,737 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9184, 4.9858, 5.0069, 4.7666, 4.7618, 4.7722, 5.1063, 5.1362], device='cuda:3'), covar=tensor([0.0069, 0.0063, 0.0065, 0.0108, 0.0062, 0.0170, 0.0066, 0.0084], device='cuda:3'), in_proj_covar=tensor([0.0099, 0.0073, 0.0078, 0.0098, 0.0078, 0.0108, 0.0090, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 19:20:22,573 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90232.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:20:28,702 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8704, 4.5548, 4.5609, 3.3509, 3.7063, 3.6270, 2.6479, 2.6158], device='cuda:3'), covar=tensor([0.0210, 0.0139, 0.0081, 0.0330, 0.0347, 0.0197, 0.0731, 0.0804], device='cuda:3'), in_proj_covar=tensor([0.0074, 0.0063, 0.0068, 0.0071, 0.0093, 0.0070, 0.0080, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-09 19:21:03,801 INFO [train.py:898] (3/4) Epoch 25, batch 3050, loss[loss=0.156, simple_loss=0.2469, pruned_loss=0.03259, over 18546.00 frames. ], tot_loss[loss=0.1581, simple_loss=0.2484, pruned_loss=0.03395, over 3579318.30 frames. ], batch size: 49, lr: 4.47e-03, grad_scale: 8.0
2023-03-09 19:21:34,121 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90293.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:21:44,553 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.813e+02 2.699e+02 3.154e+02 3.887e+02 1.496e+03, threshold=6.309e+02, percent-clipped=8.0
2023-03-09 19:22:00,236 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.41 vs. limit=5.0
2023-03-09 19:22:02,766 INFO [train.py:898] (3/4) Epoch 25, batch 3100, loss[loss=0.1595, simple_loss=0.2594, pruned_loss=0.02982, over 18018.00 frames. ], tot_loss[loss=0.1577, simple_loss=0.2479, pruned_loss=0.03376, over 3581452.39 frames. ], batch size: 65, lr: 4.47e-03, grad_scale: 8.0
2023-03-09 19:22:24,183 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90336.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:22:42,242 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90351.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:23:00,972 INFO [train.py:898] (3/4) Epoch 25, batch 3150, loss[loss=0.1841, simple_loss=0.2712, pruned_loss=0.04851, over 18464.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2484, pruned_loss=0.03405, over 3580192.03 frames. ], batch size: 59, lr: 4.46e-03, grad_scale: 4.0
2023-03-09 19:23:20,284 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=90384.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:23:42,938 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.817e+02 2.713e+02 3.340e+02 4.151e+02 5.769e+02, threshold=6.681e+02, percent-clipped=0.0
2023-03-09 19:23:53,085 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90412.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:23:59,432 INFO [train.py:898] (3/4) Epoch 25, batch 3200, loss[loss=0.1558, simple_loss=0.2439, pruned_loss=0.03382, over 18566.00 frames. ], tot_loss[loss=0.1586, simple_loss=0.2488, pruned_loss=0.03418, over 3580977.83 frames. ], batch size: 49, lr: 4.46e-03, grad_scale: 8.0
2023-03-09 19:24:48,340 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90459.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:24:57,502 INFO [train.py:898] (3/4) Epoch 25, batch 3250, loss[loss=0.1523, simple_loss=0.2422, pruned_loss=0.03121, over 18493.00 frames. ], tot_loss[loss=0.1584, simple_loss=0.2487, pruned_loss=0.03402, over 3578184.33 frames. ], batch size: 51, lr: 4.46e-03, grad_scale: 8.0
2023-03-09 19:25:19,196 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90486.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:25:31,159 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90496.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:25:38,660 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.740e+02 2.752e+02 3.132e+02 3.659e+02 1.386e+03, threshold=6.263e+02, percent-clipped=2.0
2023-03-09 19:25:43,913 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7629, 2.4710, 2.7394, 2.7598, 3.3804, 4.9294, 4.8270, 3.2428], device='cuda:3'), covar=tensor([0.1943, 0.2517, 0.3039, 0.1934, 0.2328, 0.0260, 0.0372, 0.1136], device='cuda:3'), in_proj_covar=tensor([0.0317, 0.0357, 0.0400, 0.0286, 0.0393, 0.0258, 0.0299, 0.0267], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 19:25:54,842 INFO [train.py:898] (3/4) Epoch 25, batch 3300, loss[loss=0.1685, simple_loss=0.2609, pruned_loss=0.03805, over 17797.00 frames. ], tot_loss[loss=0.1583, simple_loss=0.2485, pruned_loss=0.03403, over 3574507.58 frames. ], batch size: 70, lr: 4.46e-03, grad_scale: 8.0
2023-03-09 19:25:59,773 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90520.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:26:00,456 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0
2023-03-09 19:26:00,809 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90521.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 19:26:42,254 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90557.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:26:52,899 INFO [train.py:898] (3/4) Epoch 25, batch 3350, loss[loss=0.2038, simple_loss=0.2894, pruned_loss=0.0591, over 12433.00 frames. ], tot_loss[loss=0.159, simple_loss=0.2494, pruned_loss=0.03433, over 3575471.51 frames. ], batch size: 131, lr: 4.46e-03, grad_scale: 8.0
2023-03-09 19:26:55,393 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=90569.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:27:01,679 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3189, 5.2715, 5.5524, 5.5586, 5.2055, 6.0736, 5.7968, 5.3285], device='cuda:3'), covar=tensor([0.1206, 0.0679, 0.0798, 0.0813, 0.1464, 0.0732, 0.0715, 0.1863], device='cuda:3'), in_proj_covar=tensor([0.0369, 0.0299, 0.0327, 0.0327, 0.0340, 0.0442, 0.0294, 0.0434], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 19:27:17,359 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90588.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:27:34,782 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.869e+02 2.603e+02 3.023e+02 3.526e+02 7.217e+02, threshold=6.047e+02, percent-clipped=2.0
2023-03-09 19:27:42,485 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6527, 2.4755, 2.7075, 2.8672, 3.2234, 5.0448, 4.9532, 3.4023], device='cuda:3'), covar=tensor([0.1997, 0.2479, 0.3160, 0.1823, 0.2602, 0.0225, 0.0321, 0.1054], device='cuda:3'), in_proj_covar=tensor([0.0317, 0.0355, 0.0399, 0.0286, 0.0393, 0.0257, 0.0297, 0.0267], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 19:27:50,785 INFO [train.py:898] (3/4) Epoch 25, batch 3400, loss[loss=0.1729, simple_loss=0.2639, pruned_loss=0.04093, over 18053.00 frames. ], tot_loss[loss=0.1586, simple_loss=0.249, pruned_loss=0.03411, over 3582944.52 frames. ], batch size: 65, lr: 4.46e-03, grad_scale: 8.0
2023-03-09 19:28:01,753 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-09 19:28:25,424 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.30 vs. limit=5.0
2023-03-09 19:28:49,660 INFO [train.py:898] (3/4) Epoch 25, batch 3450, loss[loss=0.1519, simple_loss=0.2371, pruned_loss=0.03332, over 18352.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2484, pruned_loss=0.03397, over 3597845.84 frames. ], batch size: 46, lr: 4.46e-03, grad_scale: 8.0
2023-03-09 19:29:09,426 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9248, 4.9396, 5.0327, 4.7220, 4.7498, 4.8321, 5.1118, 5.0932], device='cuda:3'), covar=tensor([0.0073, 0.0064, 0.0064, 0.0109, 0.0064, 0.0124, 0.0073, 0.0094], device='cuda:3'), in_proj_covar=tensor([0.0097, 0.0072, 0.0077, 0.0096, 0.0077, 0.0105, 0.0089, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 19:29:14,062 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7904, 3.0006, 4.4432, 3.6540, 2.6418, 4.6483, 3.8503, 2.9203], device='cuda:3'), covar=tensor([0.0459, 0.1337, 0.0238, 0.0499, 0.1562, 0.0254, 0.0632, 0.0893], device='cuda:3'), in_proj_covar=tensor([0.0214, 0.0241, 0.0225, 0.0168, 0.0225, 0.0215, 0.0254, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 19:29:31,661 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.836e+02 2.461e+02 2.925e+02 3.622e+02 7.224e+02, threshold=5.850e+02, percent-clipped=2.0
2023-03-09 19:29:37,283 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90707.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:29:48,680 INFO [train.py:898] (3/4) Epoch 25, batch 3500, loss[loss=0.1454, simple_loss=0.2392, pruned_loss=0.02586, over 18289.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2474, pruned_loss=0.03357, over 3595227.28 frames. ], batch size: 49, lr: 4.46e-03, grad_scale: 8.0
2023-03-09 19:30:44,126 INFO [train.py:898] (3/4) Epoch 25, batch 3550, loss[loss=0.1686, simple_loss=0.2603, pruned_loss=0.03845, over 17774.00 frames. ], tot_loss[loss=0.1569, simple_loss=0.2471, pruned_loss=0.03338, over 3593311.15 frames. ], batch size: 70, lr: 4.45e-03, grad_scale: 8.0
2023-03-09 19:31:04,254 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90786.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:31:22,422 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.855e+02 2.570e+02 3.030e+02 3.622e+02 8.486e+02, threshold=6.060e+02, percent-clipped=3.0
2023-03-09 19:31:35,572 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90815.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:31:36,164 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.51 vs. limit=5.0
2023-03-09 19:31:37,538 INFO [train.py:898] (3/4) Epoch 25, batch 3600, loss[loss=0.1496, simple_loss=0.245, pruned_loss=0.02714, over 18503.00 frames. ], tot_loss[loss=0.1576, simple_loss=0.2479, pruned_loss=0.03366, over 3589467.06 frames. ], batch size: 51, lr: 4.45e-03, grad_scale: 8.0
2023-03-09 19:31:56,451 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=90834.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:32:11,878 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90849.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:32:40,257 INFO [train.py:898] (3/4) Epoch 26, batch 0, loss[loss=0.1354, simple_loss=0.2205, pruned_loss=0.02517, over 18484.00 frames. ], tot_loss[loss=0.1354, simple_loss=0.2205, pruned_loss=0.02517, over 18484.00 frames. ], batch size: 44, lr: 4.36e-03, grad_scale: 8.0
2023-03-09 19:32:40,258 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 19:32:52,307 INFO [train.py:932] (3/4) Epoch 26, validation: loss=0.1501, simple_loss=0.2487, pruned_loss=0.02573, over 944034.00 frames.
2023-03-09 19:32:52,308 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 19:32:53,491 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90852.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:33:35,332 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90888.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:33:49,741 INFO [train.py:898] (3/4) Epoch 26, batch 50, loss[loss=0.1501, simple_loss=0.2438, pruned_loss=0.02818, over 18499.00 frames. ], tot_loss[loss=0.1604, simple_loss=0.2518, pruned_loss=0.03451, over 807554.52 frames. ], batch size: 47, lr: 4.36e-03, grad_scale: 8.0
2023-03-09 19:33:52,049 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.759e+02 2.622e+02 3.196e+02 4.102e+02 7.514e+02, threshold=6.391e+02, percent-clipped=4.0
2023-03-09 19:34:01,622 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90910.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:34:31,658 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=90936.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:34:38,756 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7895, 4.4744, 4.4772, 3.3478, 3.6723, 3.4522, 2.5798, 2.5649], device='cuda:3'), covar=tensor([0.0207, 0.0122, 0.0082, 0.0307, 0.0325, 0.0236, 0.0722, 0.0768], device='cuda:3'), in_proj_covar=tensor([0.0074, 0.0063, 0.0068, 0.0070, 0.0092, 0.0070, 0.0079, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 19:34:48,591 INFO [train.py:898] (3/4) Epoch 26, batch 100, loss[loss=0.1382, simple_loss=0.2202, pruned_loss=0.02809, over 18498.00 frames. ], tot_loss[loss=0.1588, simple_loss=0.2493, pruned_loss=0.03418, over 1435133.11 frames. ], batch size: 44, lr: 4.36e-03, grad_scale: 8.0
2023-03-09 19:35:44,196 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5519, 3.6852, 4.7719, 4.2454, 3.2896, 2.9213, 4.3487, 5.0180], device='cuda:3'), covar=tensor([0.0808, 0.1362, 0.0253, 0.0404, 0.0906, 0.1194, 0.0387, 0.0337], device='cuda:3'), in_proj_covar=tensor([0.0153, 0.0283, 0.0167, 0.0187, 0.0196, 0.0195, 0.0200, 0.0209], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:35:47,471 INFO [train.py:898] (3/4) Epoch 26, batch 150, loss[loss=0.1659, simple_loss=0.2559, pruned_loss=0.03798, over 18381.00 frames. ], tot_loss[loss=0.1592, simple_loss=0.2499, pruned_loss=0.03428, over 1903188.06 frames. ], batch size: 50, lr: 4.36e-03, grad_scale: 8.0
2023-03-09 19:35:49,740 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.912e+02 2.712e+02 3.182e+02 3.692e+02 5.749e+02, threshold=6.364e+02, percent-clipped=0.0
2023-03-09 19:35:54,575 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4584, 5.3373, 5.7293, 5.7761, 5.3084, 6.2078, 5.8695, 5.4887], device='cuda:3'), covar=tensor([0.0980, 0.0619, 0.0713, 0.0703, 0.1275, 0.0638, 0.0609, 0.1772], device='cuda:3'), in_proj_covar=tensor([0.0367, 0.0298, 0.0325, 0.0325, 0.0338, 0.0439, 0.0292, 0.0432], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 19:35:54,688 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91007.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:36:46,107 INFO [train.py:898] (3/4) Epoch 26, batch 200, loss[loss=0.1564, simple_loss=0.2423, pruned_loss=0.03522, over 18553.00 frames. ], tot_loss[loss=0.159, simple_loss=0.25, pruned_loss=0.03399, over 2284439.09 frames. ], batch size: 49, lr: 4.36e-03, grad_scale: 8.0
2023-03-09 19:36:49,188 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-09 19:36:50,991 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=91055.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:37:31,624 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8333, 4.5320, 4.6134, 3.4703, 3.7878, 3.5641, 2.9094, 2.6989], device='cuda:3'), covar=tensor([0.0236, 0.0154, 0.0081, 0.0304, 0.0335, 0.0233, 0.0653, 0.0796], device='cuda:3'), in_proj_covar=tensor([0.0073, 0.0062, 0.0067, 0.0070, 0.0091, 0.0069, 0.0078, 0.0084], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 19:37:43,760 INFO [train.py:898] (3/4) Epoch 26, batch 250, loss[loss=0.1591, simple_loss=0.241, pruned_loss=0.03859, over 18156.00 frames. ], tot_loss[loss=0.1583, simple_loss=0.2492, pruned_loss=0.03373, over 2583724.16 frames. ], batch size: 44, lr: 4.36e-03, grad_scale: 8.0
2023-03-09 19:37:45,994 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.600e+02 2.590e+02 3.099e+02 3.572e+02 5.022e+02, threshold=6.197e+02, percent-clipped=0.0
2023-03-09 19:37:59,554 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91115.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:38:35,966 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-09 19:38:41,415 INFO [train.py:898] (3/4) Epoch 26, batch 300, loss[loss=0.1569, simple_loss=0.2483, pruned_loss=0.03274, over 18526.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.249, pruned_loss=0.03368, over 2813680.05 frames. ], batch size: 49, lr: 4.36e-03, grad_scale: 8.0
2023-03-09 19:38:42,819 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91152.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:38:55,162 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=91163.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:39:07,362 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8920, 4.9822, 5.0146, 4.7251, 4.7192, 4.7359, 5.0800, 5.1176], device='cuda:3'), covar=tensor([0.0079, 0.0060, 0.0059, 0.0122, 0.0071, 0.0161, 0.0077, 0.0096], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0074, 0.0079, 0.0098, 0.0078, 0.0108, 0.0090, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 19:39:39,111 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=91200.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:39:40,113 INFO [train.py:898] (3/4) Epoch 26, batch 350, loss[loss=0.1298, simple_loss=0.2126, pruned_loss=0.02353, over 18413.00 frames. ], tot_loss[loss=0.1581, simple_loss=0.2487, pruned_loss=0.03373, over 2996243.02 frames. ], batch size: 42, lr: 4.36e-03, grad_scale: 8.0
2023-03-09 19:39:42,444 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.808e+02 2.557e+02 3.001e+02 3.591e+02 6.784e+02, threshold=6.003e+02, percent-clipped=1.0
2023-03-09 19:39:44,919 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=91205.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:40:15,354 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7247, 3.4913, 2.5711, 4.4582, 3.2041, 4.2739, 2.6476, 4.1854], device='cuda:3'), covar=tensor([0.0589, 0.0940, 0.1403, 0.0505, 0.0859, 0.0285, 0.1190, 0.0354], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0232, 0.0195, 0.0296, 0.0197, 0.0272, 0.0208, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:40:38,706 INFO [train.py:898] (3/4) Epoch 26, batch 400, loss[loss=0.1445, simple_loss=0.2342, pruned_loss=0.02738, over 18361.00 frames. ], tot_loss[loss=0.1576, simple_loss=0.2482, pruned_loss=0.03349, over 3128348.67 frames. ], batch size: 50, lr: 4.35e-03, grad_scale: 8.0
2023-03-09 19:41:18,922 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91285.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:41:37,897 INFO [train.py:898] (3/4) Epoch 26, batch 450, loss[loss=0.1541, simple_loss=0.2319, pruned_loss=0.03817, over 18166.00 frames. ], tot_loss[loss=0.1574, simple_loss=0.2477, pruned_loss=0.03361, over 3230724.70 frames. ], batch size: 44, lr: 4.35e-03, grad_scale: 8.0
2023-03-09 19:41:40,037 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.596e+02 2.481e+02 2.873e+02 3.390e+02 6.237e+02, threshold=5.746e+02, percent-clipped=1.0
2023-03-09 19:41:51,787 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5542, 3.4089, 4.6342, 4.0528, 3.1623, 2.8667, 4.1500, 4.7640], device='cuda:3'), covar=tensor([0.0842, 0.1430, 0.0209, 0.0438, 0.0996, 0.1254, 0.0409, 0.0239], device='cuda:3'), in_proj_covar=tensor([0.0152, 0.0280, 0.0165, 0.0185, 0.0194, 0.0193, 0.0199, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:42:09,766 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.56 vs. limit=2.0
2023-03-09 19:42:10,658 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4021, 3.9407, 3.8689, 3.1146, 3.3596, 3.1039, 2.4884, 2.4731], device='cuda:3'), covar=tensor([0.0258, 0.0148, 0.0122, 0.0332, 0.0368, 0.0253, 0.0712, 0.0768], device='cuda:3'), in_proj_covar=tensor([0.0074, 0.0063, 0.0067, 0.0070, 0.0092, 0.0070, 0.0079, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 19:42:29,803 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=91346.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:42:35,737 INFO [train.py:898] (3/4) Epoch 26, batch 500, loss[loss=0.1515, simple_loss=0.2335, pruned_loss=0.03473, over 17723.00 frames. ], tot_loss[loss=0.1572, simple_loss=0.2475, pruned_loss=0.03344, over 3312574.27 frames. ], batch size: 39, lr: 4.35e-03, grad_scale: 8.0
2023-03-09 19:43:32,650 INFO [train.py:898] (3/4) Epoch 26, batch 550, loss[loss=0.2241, simple_loss=0.2956, pruned_loss=0.07628, over 12833.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2485, pruned_loss=0.03422, over 3373965.89 frames. ], batch size: 129, lr: 4.35e-03, grad_scale: 8.0
2023-03-09 19:43:35,350 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.676e+02 2.412e+02 2.843e+02 3.584e+02 8.267e+02, threshold=5.686e+02, percent-clipped=4.0
2023-03-09 19:43:52,920 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-09 19:44:30,276 INFO [train.py:898] (3/4) Epoch 26, batch 600, loss[loss=0.1558, simple_loss=0.2404, pruned_loss=0.03564, over 18368.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2483, pruned_loss=0.03408, over 3403173.35 frames. ], batch size: 46, lr: 4.35e-03, grad_scale: 8.0
2023-03-09 19:45:11,609 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0346, 5.4487, 2.8667, 5.3277, 5.2019, 5.4823, 5.3419, 3.0567], device='cuda:3'), covar=tensor([0.0210, 0.0066, 0.0748, 0.0065, 0.0064, 0.0065, 0.0074, 0.0816], device='cuda:3'), in_proj_covar=tensor([0.0091, 0.0083, 0.0097, 0.0098, 0.0089, 0.0079, 0.0086, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 19:45:29,312 INFO [train.py:898] (3/4) Epoch 26, batch 650, loss[loss=0.1392, simple_loss=0.2197, pruned_loss=0.02932, over 17595.00 frames. ], tot_loss[loss=0.1574, simple_loss=0.2472, pruned_loss=0.03379, over 3438106.53 frames. ], batch size: 39, lr: 4.35e-03, grad_scale: 8.0
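Note on the optim.py records: the five "grad-norm quartiles" values read as min / 25% / median / 75% / max of recent gradient norms, and the logged threshold tracks Clipping_scale times the median (e.g. 2.0 * 2.873e+02 = 5.746e+02 above), with percent-clipped counting how often a batch exceeded it. A hedged sketch of that bookkeeping, assuming a simple per-window computation rather than icefall's exact running estimate:

    # Illustrative reconstruction of the quartile/threshold report; names and
    # windowing are assumptions, only the logged relationship is from the log.
    import torch

    def clipping_report(grad_norms: torch.Tensor, clipping_scale: float = 2.0):
        q = torch.quantile(grad_norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = clipping_scale * q[2]        # 2x the median grad norm
        percent_clipped = 100.0 * (grad_norms > threshold).float().mean()
        return q, threshold, percent_clipped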
2023-03-09 19:45:32,529 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.747e+02 2.481e+02 2.888e+02 3.491e+02 6.758e+02, threshold=5.775e+02, percent-clipped=2.0
2023-03-09 19:45:33,434 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-09 19:45:34,566 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91505.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:46:16,609 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91542.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 19:46:27,363 INFO [train.py:898] (3/4) Epoch 26, batch 700, loss[loss=0.1445, simple_loss=0.2334, pruned_loss=0.02778, over 18298.00 frames. ], tot_loss[loss=0.1576, simple_loss=0.2473, pruned_loss=0.03394, over 3472662.51 frames. ], batch size: 49, lr: 4.35e-03, grad_scale: 4.0
2023-03-09 19:46:29,758 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=91553.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:46:43,417 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2954, 5.8389, 5.4374, 5.6440, 5.4637, 5.2727, 5.9399, 5.8746], device='cuda:3'), covar=tensor([0.1190, 0.0818, 0.0551, 0.0755, 0.1431, 0.0764, 0.0617, 0.0744], device='cuda:3'), in_proj_covar=tensor([0.0628, 0.0556, 0.0404, 0.0579, 0.0777, 0.0575, 0.0791, 0.0603], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 19:47:26,236 INFO [train.py:898] (3/4) Epoch 26, batch 750, loss[loss=0.1771, simple_loss=0.2717, pruned_loss=0.04124, over 17599.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2479, pruned_loss=0.03424, over 3490875.35 frames. ], batch size: 70, lr: 4.35e-03, grad_scale: 4.0
2023-03-09 19:47:29,391 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=91603.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 19:47:30,099 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.568e+02 2.592e+02 2.993e+02 3.569e+02 6.310e+02, threshold=5.987e+02, percent-clipped=2.0
2023-03-09 19:48:12,597 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=91641.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:48:23,823 INFO [train.py:898] (3/4) Epoch 26, batch 800, loss[loss=0.1412, simple_loss=0.238, pruned_loss=0.02223, over 18460.00 frames. ], tot_loss[loss=0.1575, simple_loss=0.2473, pruned_loss=0.03381, over 3527779.42 frames. ], batch size: 53, lr: 4.35e-03, grad_scale: 8.0
2023-03-09 19:48:34,568 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8402, 3.7765, 5.1858, 2.9659, 4.5218, 2.6292, 3.2645, 1.7633], device='cuda:3'), covar=tensor([0.1278, 0.0936, 0.0162, 0.0957, 0.0491, 0.2562, 0.2505, 0.2280], device='cuda:3'), in_proj_covar=tensor([0.0228, 0.0250, 0.0218, 0.0206, 0.0263, 0.0277, 0.0335, 0.0242], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 19:49:22,522 INFO [train.py:898] (3/4) Epoch 26, batch 850, loss[loss=0.1503, simple_loss=0.2401, pruned_loss=0.03029, over 18557.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2474, pruned_loss=0.03361, over 3538495.13 frames. ], batch size: 54, lr: 4.34e-03, grad_scale: 8.0
2023-03-09 19:49:25,715 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.771e+02 2.531e+02 3.031e+02 3.445e+02 6.326e+02, threshold=6.062e+02, percent-clipped=1.0
2023-03-09 19:50:20,636 INFO [train.py:898] (3/4) Epoch 26, batch 900, loss[loss=0.1574, simple_loss=0.2507, pruned_loss=0.03203, over 18266.00 frames. ], tot_loss[loss=0.1574, simple_loss=0.2478, pruned_loss=0.03355, over 3544183.75 frames. ], batch size: 57, lr: 4.34e-03, grad_scale: 8.0
2023-03-09 19:51:14,106 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-09 19:51:16,415 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-09 19:51:19,511 INFO [train.py:898] (3/4) Epoch 26, batch 950, loss[loss=0.1476, simple_loss=0.2441, pruned_loss=0.02557, over 18404.00 frames. ], tot_loss[loss=0.1571, simple_loss=0.2477, pruned_loss=0.03327, over 3545016.95 frames. ], batch size: 52, lr: 4.34e-03, grad_scale: 8.0
2023-03-09 19:51:22,550 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.812e+02 2.439e+02 3.074e+02 3.438e+02 1.468e+03, threshold=6.149e+02, percent-clipped=4.0
2023-03-09 19:52:17,351 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4062, 5.3078, 5.6700, 5.7084, 5.2550, 6.2328, 5.8587, 5.4195], device='cuda:3'), covar=tensor([0.1115, 0.0654, 0.0694, 0.0760, 0.1499, 0.0637, 0.0705, 0.1693], device='cuda:3'), in_proj_covar=tensor([0.0363, 0.0296, 0.0322, 0.0325, 0.0337, 0.0435, 0.0291, 0.0427], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 19:52:18,266 INFO [train.py:898] (3/4) Epoch 26, batch 1000, loss[loss=0.1306, simple_loss=0.2127, pruned_loss=0.02427, over 18518.00 frames. ], tot_loss[loss=0.1566, simple_loss=0.2468, pruned_loss=0.03317, over 3549288.11 frames. ], batch size: 44, lr: 4.34e-03, grad_scale: 8.0
2023-03-09 19:53:05,570 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91891.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:53:13,173 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=91898.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 19:53:16,256 INFO [train.py:898] (3/4) Epoch 26, batch 1050, loss[loss=0.1648, simple_loss=0.2575, pruned_loss=0.03609, over 18372.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.248, pruned_loss=0.03334, over 3568784.79 frames. ], batch size: 56, lr: 4.34e-03, grad_scale: 8.0
2023-03-09 19:53:19,766 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.941e+02 2.632e+02 3.014e+02 3.529e+02 5.169e+02, threshold=6.027e+02, percent-clipped=0.0
2023-03-09 19:53:38,743 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-09 19:54:03,916 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91941.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:54:14,902 INFO [train.py:898] (3/4) Epoch 26, batch 1100, loss[loss=0.1381, simple_loss=0.2191, pruned_loss=0.02854, over 17774.00 frames. ], tot_loss[loss=0.1576, simple_loss=0.248, pruned_loss=0.03357, over 3573387.92 frames. ], batch size: 39, lr: 4.34e-03, grad_scale: 8.0
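Note on the zipformer.py:1455 records: they dump per-head entropies of the attention weights as a diagnostic (low entropy means a head attends to few positions). A hedged sketch of how such a diagnostic is typically computed; this reproduces the idea behind the records, not necessarily the module's exact code:

    # Per-head entropy of softmaxed attention weights, averaged over queries.
    import torch

    def attention_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
        """attn: (num_heads, num_queries, num_keys), rows summing to 1.
        Returns one entropy value per head, in nats."""
        ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (num_heads, num_queries)
        return ent.mean(dim=-1)                         # (num_heads,)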
2023-03-09 19:54:16,437 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=91952.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:54:19,694 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91955.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:54:50,811 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91981.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 19:54:59,809 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=91989.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:55:17,961 INFO [train.py:898] (3/4) Epoch 26, batch 1150, loss[loss=0.1701, simple_loss=0.269, pruned_loss=0.0356, over 16338.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2482, pruned_loss=0.03381, over 3580038.28 frames. ], batch size: 94, lr: 4.34e-03, grad_scale: 8.0
2023-03-09 19:55:21,345 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.947e+02 2.558e+02 3.039e+02 3.805e+02 7.483e+02, threshold=6.077e+02, percent-clipped=1.0
2023-03-09 19:55:34,983 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92016.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 19:55:44,343 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-09 19:56:06,573 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92042.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 19:56:16,299 INFO [train.py:898] (3/4) Epoch 26, batch 1200, loss[loss=0.1564, simple_loss=0.2514, pruned_loss=0.03075, over 18457.00 frames. ], tot_loss[loss=0.158, simple_loss=0.2486, pruned_loss=0.03371, over 3569767.69 frames. ], batch size: 59, lr: 4.34e-03, grad_scale: 8.0
2023-03-09 19:56:50,597 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8506, 5.1162, 2.3388, 4.9945, 4.8721, 5.1526, 4.9292, 2.4112], device='cuda:3'), covar=tensor([0.0250, 0.0060, 0.0922, 0.0086, 0.0071, 0.0068, 0.0095, 0.1157], device='cuda:3'), in_proj_covar=tensor([0.0092, 0.0084, 0.0098, 0.0099, 0.0089, 0.0080, 0.0087, 0.0099], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 19:57:14,326 INFO [train.py:898] (3/4) Epoch 26, batch 1250, loss[loss=0.1448, simple_loss=0.2256, pruned_loss=0.03204, over 18393.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2488, pruned_loss=0.03379, over 3577727.57 frames. ], batch size: 42, lr: 4.33e-03, grad_scale: 8.0
2023-03-09 19:57:17,636 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.939e+02 2.685e+02 3.194e+02 3.689e+02 7.845e+02, threshold=6.387e+02, percent-clipped=2.0
2023-03-09 19:57:25,817 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7100, 3.5992, 2.4402, 4.5648, 3.2987, 4.3911, 2.6608, 4.1447], device='cuda:3'), covar=tensor([0.0622, 0.0888, 0.1429, 0.0461, 0.0804, 0.0383, 0.1230, 0.0401], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0233, 0.0196, 0.0297, 0.0197, 0.0273, 0.0209, 0.0208], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:58:04,129 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2447, 5.2006, 4.8888, 5.1597, 5.1584, 4.5672, 5.0829, 4.8202], device='cuda:3'), covar=tensor([0.0469, 0.0512, 0.1446, 0.0825, 0.0618, 0.0435, 0.0495, 0.1155], device='cuda:3'), in_proj_covar=tensor([0.0510, 0.0582, 0.0716, 0.0449, 0.0475, 0.0521, 0.0558, 0.0690], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 19:58:12,743 INFO [train.py:898] (3/4) Epoch 26, batch 1300, loss[loss=0.1378, simple_loss=0.2197, pruned_loss=0.02797, over 18449.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2492, pruned_loss=0.0339, over 3586184.62 frames. ], batch size: 43, lr: 4.33e-03, grad_scale: 8.0
2023-03-09 19:58:25,361 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8420, 3.5777, 5.0641, 2.8962, 4.4018, 2.5430, 3.0444, 1.5933], device='cuda:3'), covar=tensor([0.1289, 0.1060, 0.0145, 0.1055, 0.0475, 0.2636, 0.2749, 0.2464], device='cuda:3'), in_proj_covar=tensor([0.0229, 0.0253, 0.0221, 0.0207, 0.0266, 0.0278, 0.0339, 0.0244], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 19:58:42,809 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5813, 3.3728, 2.0161, 4.3181, 3.0574, 3.7790, 2.2206, 3.6368], device='cuda:3'), covar=tensor([0.0582, 0.0796, 0.1575, 0.0511, 0.0783, 0.0370, 0.1357, 0.0525], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0231, 0.0195, 0.0295, 0.0196, 0.0271, 0.0207, 0.0206], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:58:46,222 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6903, 2.3108, 2.6458, 2.7496, 3.1599, 4.9121, 4.9077, 3.2928], device='cuda:3'), covar=tensor([0.2002, 0.2647, 0.3159, 0.1960, 0.2706, 0.0253, 0.0322, 0.1110], device='cuda:3'), in_proj_covar=tensor([0.0321, 0.0358, 0.0403, 0.0288, 0.0394, 0.0259, 0.0301, 0.0268], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 19:59:07,903 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92198.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 19:59:11,350 INFO [train.py:898] (3/4) Epoch 26, batch 1350, loss[loss=0.1322, simple_loss=0.2107, pruned_loss=0.0269, over 18395.00 frames. ], tot_loss[loss=0.1584, simple_loss=0.2491, pruned_loss=0.03389, over 3584728.29 frames. ], batch size: 42, lr: 4.33e-03, grad_scale: 8.0
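Note on the scaling.py:679 records: each compares a per-group "whitening" metric against a limit (2.0 for the 8-group 96/192-channel cases, 5.0 for the single-group 384-channel cases seen later). One plausible formulation consistent with those fields measures how far the grouped feature covariance is from isotropic; the ratio below is 1.0 for a perfectly white covariance and grows with eigenvalue spread. A hedged sketch only; the grouping and normalization are assumptions:

    # Illustrative whitening metric: mean squared eigenvalue over squared mean
    # eigenvalue of each group's covariance (== 1.0 when the group is "white").
    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        """x: (num_frames, num_channels); returns one metric per group."""
        n, c = x.shape
        g = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
        g = g - g.mean(dim=1, keepdim=True)
        cov = g.transpose(1, 2) @ g / n          # (groups, d, d)
        eigs = torch.linalg.eigvalsh(cov)        # per-group spectrum
        return (eigs ** 2).mean(dim=-1) / eigs.mean(dim=-1).clamp(min=1e-20) ** 2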
2023-03-09 19:59:14,649 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.584e+02 2.419e+02 2.885e+02 3.522e+02 6.118e+02, threshold=5.770e+02, percent-clipped=0.0
2023-03-09 19:59:26,277 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9400, 3.8395, 5.0918, 4.6949, 3.5807, 3.2105, 4.5865, 5.3870], device='cuda:3'), covar=tensor([0.0751, 0.1436, 0.0224, 0.0364, 0.0867, 0.1137, 0.0377, 0.0221], device='cuda:3'), in_proj_covar=tensor([0.0153, 0.0283, 0.0168, 0.0187, 0.0197, 0.0196, 0.0201, 0.0209], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 19:59:31,656 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6839, 3.6189, 2.3173, 4.4661, 3.1674, 4.3217, 2.7399, 4.0954], device='cuda:3'), covar=tensor([0.0647, 0.0857, 0.1459, 0.0488, 0.0870, 0.0336, 0.1086, 0.0387], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0232, 0.0196, 0.0297, 0.0197, 0.0272, 0.0208, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 20:00:03,380 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=92246.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 20:00:05,022 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92247.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:00:09,203 INFO [train.py:898] (3/4) Epoch 26, batch 1400, loss[loss=0.1566, simple_loss=0.2546, pruned_loss=0.02931, over 18573.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2488, pruned_loss=0.03385, over 3583218.18 frames. ], batch size: 54, lr: 4.33e-03, grad_scale: 8.0
2023-03-09 20:00:21,841 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6134, 2.9930, 4.2695, 3.6014, 2.7718, 4.4873, 3.8082, 2.8811], device='cuda:3'), covar=tensor([0.0548, 0.1416, 0.0350, 0.0541, 0.1607, 0.0252, 0.0626, 0.1015], device='cuda:3'), in_proj_covar=tensor([0.0219, 0.0246, 0.0231, 0.0172, 0.0228, 0.0221, 0.0259, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 20:01:06,844 INFO [train.py:898] (3/4) Epoch 26, batch 1450, loss[loss=0.1485, simple_loss=0.2366, pruned_loss=0.03014, over 18504.00 frames. ], tot_loss[loss=0.158, simple_loss=0.2484, pruned_loss=0.03381, over 3594537.58 frames. ], batch size: 47, lr: 4.33e-03, grad_scale: 8.0
2023-03-09 20:01:10,175 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.801e+02 2.582e+02 3.137e+02 3.833e+02 9.171e+02, threshold=6.274e+02, percent-clipped=8.0
2023-03-09 20:01:14,665 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6267, 4.0471, 2.3078, 3.9804, 5.0310, 2.4992, 3.7796, 3.9245], device='cuda:3'), covar=tensor([0.0246, 0.1056, 0.1672, 0.0629, 0.0103, 0.1228, 0.0682, 0.0653], device='cuda:3'), in_proj_covar=tensor([0.0180, 0.0279, 0.0207, 0.0201, 0.0138, 0.0186, 0.0222, 0.0228], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 20:01:19,061 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92311.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:01:48,307 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92337.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 20:02:04,836 INFO [train.py:898] (3/4) Epoch 26, batch 1500, loss[loss=0.1839, simple_loss=0.2751, pruned_loss=0.04639, over 18342.00 frames. ], tot_loss[loss=0.1577, simple_loss=0.2482, pruned_loss=0.0336, over 3591418.53 frames. ], batch size: 56, lr: 4.33e-03, grad_scale: 8.0
2023-03-09 20:03:03,825 INFO [train.py:898] (3/4) Epoch 26, batch 1550, loss[loss=0.1639, simple_loss=0.2618, pruned_loss=0.03301, over 18586.00 frames. ], tot_loss[loss=0.1574, simple_loss=0.248, pruned_loss=0.0334, over 3599162.40 frames. ], batch size: 54, lr: 4.33e-03, grad_scale: 8.0
2023-03-09 20:03:07,217 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.867e+02 2.584e+02 2.933e+02 3.549e+02 6.992e+02, threshold=5.866e+02, percent-clipped=1.0
2023-03-09 20:04:01,775 INFO [train.py:898] (3/4) Epoch 26, batch 1600, loss[loss=0.1459, simple_loss=0.244, pruned_loss=0.02393, over 18374.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2478, pruned_loss=0.03337, over 3583459.02 frames. ], batch size: 50, lr: 4.33e-03, grad_scale: 8.0
2023-03-09 20:04:33,958 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5526, 2.8681, 4.2198, 3.6548, 2.6689, 4.4205, 3.8647, 2.7265], device='cuda:3'), covar=tensor([0.0502, 0.1422, 0.0328, 0.0469, 0.1620, 0.0241, 0.0565, 0.1082], device='cuda:3'), in_proj_covar=tensor([0.0219, 0.0244, 0.0229, 0.0171, 0.0226, 0.0219, 0.0256, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 20:04:38,736 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0
2023-03-09 20:04:59,870 INFO [train.py:898] (3/4) Epoch 26, batch 1650, loss[loss=0.1585, simple_loss=0.2493, pruned_loss=0.03384, over 18373.00 frames. ], tot_loss[loss=0.1571, simple_loss=0.2475, pruned_loss=0.03334, over 3589916.69 frames. ], batch size: 50, lr: 4.33e-03, grad_scale: 8.0
2023-03-09 20:05:02,960 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.649e+02 2.477e+02 2.880e+02 3.587e+02 5.533e+02, threshold=5.760e+02, percent-clipped=0.0
2023-03-09 20:05:53,444 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92547.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:05:58,161 INFO [train.py:898] (3/4) Epoch 26, batch 1700, loss[loss=0.1406, simple_loss=0.2284, pruned_loss=0.02639, over 18519.00 frames. ], tot_loss[loss=0.1568, simple_loss=0.2475, pruned_loss=0.03309, over 3586538.97 frames. ], batch size: 47, lr: 4.32e-03, grad_scale: 8.0
2023-03-09 20:06:48,875 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=92595.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:06:55,616 INFO [train.py:898] (3/4) Epoch 26, batch 1750, loss[loss=0.1745, simple_loss=0.2594, pruned_loss=0.04476, over 18348.00 frames. ], tot_loss[loss=0.1567, simple_loss=0.2474, pruned_loss=0.03302, over 3588772.29 frames. ], batch size: 56, lr: 4.32e-03, grad_scale: 8.0
2023-03-09 20:06:59,646 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.605e+02 2.546e+02 2.994e+02 3.589e+02 6.308e+02, threshold=5.987e+02, percent-clipped=1.0
2023-03-09 20:07:00,146 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0657, 3.7069, 5.0933, 3.0439, 4.4958, 2.6186, 3.1132, 1.8095], device='cuda:3'), covar=tensor([0.1104, 0.0967, 0.0154, 0.0965, 0.0476, 0.2647, 0.2868, 0.2427], device='cuda:3'), in_proj_covar=tensor([0.0228, 0.0251, 0.0219, 0.0206, 0.0266, 0.0277, 0.0336, 0.0244], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 20:07:08,066 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92611.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:07:25,056 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92625.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:07:38,500 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92637.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 20:07:54,320 INFO [train.py:898] (3/4) Epoch 26, batch 1800, loss[loss=0.1524, simple_loss=0.2436, pruned_loss=0.03061, over 18358.00 frames. ], tot_loss[loss=0.1574, simple_loss=0.2482, pruned_loss=0.03335, over 3596437.53 frames. ], batch size: 56, lr: 4.32e-03, grad_scale: 8.0
2023-03-09 20:08:04,028 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=92659.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:08:35,068 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=92685.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 20:08:36,187 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92686.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:08:52,537 INFO [train.py:898] (3/4) Epoch 26, batch 1850, loss[loss=0.1416, simple_loss=0.2301, pruned_loss=0.02653, over 18265.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.248, pruned_loss=0.03328, over 3602791.10 frames. ], batch size: 47, lr: 4.32e-03, grad_scale: 8.0
2023-03-09 20:08:55,755 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.027e+02 2.840e+02 3.330e+02 3.901e+02 1.111e+03, threshold=6.660e+02, percent-clipped=3.0
2023-03-09 20:09:16,122 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92720.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:09:51,313 INFO [train.py:898] (3/4) Epoch 26, batch 1900, loss[loss=0.1522, simple_loss=0.2457, pruned_loss=0.02941, over 18299.00 frames. ], tot_loss[loss=0.1576, simple_loss=0.2483, pruned_loss=0.03341, over 3600135.79 frames. ], batch size: 54, lr: 4.32e-03, grad_scale: 8.0
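Note on the zipformer.py:625 records: each encoder stack keeps a warmup window [warmup_begin, warmup_end] in batch counts and samples a set of layers to skip for the current batch. Well past the window (batch_count here is ~92k against warmup_end=4000) the usual outcome is num_to_drop=0, with occasional single-layer drops such as layers_to_drop={1}. A hedged sketch of that bookkeeping; the probabilities are illustrative, only the logged fields come from the records:

    import random

    def pick_layers_to_drop(batch_count: float, warmup_begin: float,
                            warmup_end: float, num_layers: int,
                            p_inside: float = 0.5, p_after: float = 0.05) -> set:
        if batch_count < warmup_begin:
            p = 0.0          # stack not yet in its warmup window
        elif batch_count < warmup_end:
            p = p_inside     # aggressive dropping while warming up
        else:
            p = p_after      # rare dropping long after warmup, matching the
                             # occasional num_to_drop=1 records above
        return {i for i in range(num_layers) if random.random() < p}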
2023-03-09 20:10:28,282 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92781.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:10:50,691 INFO [train.py:898] (3/4) Epoch 26, batch 1950, loss[loss=0.1636, simple_loss=0.2573, pruned_loss=0.03496, over 16030.00 frames. ], tot_loss[loss=0.157, simple_loss=0.2479, pruned_loss=0.0331, over 3596141.54 frames. ], batch size: 94, lr: 4.32e-03, grad_scale: 8.0
2023-03-09 20:10:54,088 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.627e+02 2.403e+02 2.804e+02 3.512e+02 6.323e+02, threshold=5.608e+02, percent-clipped=0.0
2023-03-09 20:11:26,621 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92831.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:11:49,093 INFO [train.py:898] (3/4) Epoch 26, batch 2000, loss[loss=0.1527, simple_loss=0.2434, pruned_loss=0.03105, over 18272.00 frames. ], tot_loss[loss=0.1571, simple_loss=0.248, pruned_loss=0.03313, over 3594520.20 frames. ], batch size: 49, lr: 4.32e-03, grad_scale: 8.0
2023-03-09 20:12:38,409 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92892.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:12:45,967 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92899.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:12:47,833 INFO [train.py:898] (3/4) Epoch 26, batch 2050, loss[loss=0.135, simple_loss=0.2208, pruned_loss=0.02456, over 17669.00 frames. ], tot_loss[loss=0.1572, simple_loss=0.2482, pruned_loss=0.0331, over 3597453.15 frames. ], batch size: 39, lr: 4.32e-03, grad_scale: 8.0
2023-03-09 20:12:51,217 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.874e+02 2.574e+02 3.005e+02 3.405e+02 1.130e+03, threshold=6.011e+02, percent-clipped=2.0
2023-03-09 20:13:23,162 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92931.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:13:45,950 INFO [train.py:898] (3/4) Epoch 26, batch 2100, loss[loss=0.1848, simple_loss=0.275, pruned_loss=0.04729, over 18301.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2488, pruned_loss=0.0335, over 3587760.36 frames. ], batch size: 57, lr: 4.31e-03, grad_scale: 8.0
2023-03-09 20:13:56,558 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92960.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:14:14,070 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92975.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:14:21,753 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92981.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:14:34,803 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92992.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:14:44,671 INFO [train.py:898] (3/4) Epoch 26, batch 2150, loss[loss=0.1576, simple_loss=0.2572, pruned_loss=0.02898, over 18630.00 frames. ], tot_loss[loss=0.1577, simple_loss=0.2486, pruned_loss=0.03347, over 3596054.65 frames. ], batch size: 52, lr: 4.31e-03, grad_scale: 8.0
2023-03-09 20:14:48,054 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.894e+02 2.711e+02 3.209e+02 3.707e+02 5.473e+02, threshold=6.417e+02, percent-clipped=0.0
2023-03-09 20:14:55,187 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93010.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:15:26,106 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93036.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:15:26,149 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93036.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:15:43,268 INFO [train.py:898] (3/4) Epoch 26, batch 2200, loss[loss=0.1439, simple_loss=0.2259, pruned_loss=0.03097, over 18167.00 frames. ], tot_loss[loss=0.1575, simple_loss=0.2485, pruned_loss=0.03329, over 3607412.27 frames. ], batch size: 44, lr: 4.31e-03, grad_scale: 8.0
2023-03-09 20:15:48,166 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8343, 4.8966, 4.9014, 4.6614, 4.6714, 4.7027, 4.9801, 4.9706], device='cuda:3'), covar=tensor([0.0071, 0.0060, 0.0062, 0.0113, 0.0068, 0.0150, 0.0075, 0.0096], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0074, 0.0079, 0.0098, 0.0079, 0.0108, 0.0091, 0.0090], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 20:16:06,487 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93071.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:16:12,804 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93076.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:16:15,351 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0
2023-03-09 20:16:38,282 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93097.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 20:16:42,358 INFO [train.py:898] (3/4) Epoch 26, batch 2250, loss[loss=0.1548, simple_loss=0.2518, pruned_loss=0.02891, over 18552.00 frames. ], tot_loss[loss=0.1571, simple_loss=0.2478, pruned_loss=0.03319, over 3600905.08 frames. ], batch size: 54, lr: 4.31e-03, grad_scale: 8.0
2023-03-09 20:16:45,629 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.732e+02 2.458e+02 2.849e+02 3.441e+02 5.512e+02, threshold=5.698e+02, percent-clipped=0.0
2023-03-09 20:16:58,290 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93115.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:17:20,373 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9016, 4.0870, 2.5424, 4.1587, 5.1861, 2.6445, 3.8717, 3.9518], device='cuda:3'), covar=tensor([0.0206, 0.1191, 0.1606, 0.0584, 0.0100, 0.1191, 0.0665, 0.0723], device='cuda:3'), in_proj_covar=tensor([0.0179, 0.0276, 0.0206, 0.0199, 0.0138, 0.0186, 0.0221, 0.0228], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 20:17:41,073 INFO [train.py:898] (3/4) Epoch 26, batch 2300, loss[loss=0.1899, simple_loss=0.2728, pruned_loss=0.05355, over 12518.00 frames. ], tot_loss[loss=0.1583, simple_loss=0.249, pruned_loss=0.03383, over 3596474.65 frames. ], batch size: 131, lr: 4.31e-03, grad_scale: 8.0
2023-03-09 20:18:09,642 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93176.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:18:22,419 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93187.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:18:33,180 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8429, 5.0582, 5.0275, 5.0718, 4.8211, 5.6267, 5.2640, 4.8878], device='cuda:3'), covar=tensor([0.1192, 0.0825, 0.0851, 0.0868, 0.1456, 0.0778, 0.0776, 0.1736], device='cuda:3'), in_proj_covar=tensor([0.0369, 0.0304, 0.0326, 0.0330, 0.0343, 0.0441, 0.0296, 0.0432], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 20:18:40,066 INFO [train.py:898] (3/4) Epoch 26, batch 2350, loss[loss=0.1619, simple_loss=0.2624, pruned_loss=0.03072, over 18634.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2493, pruned_loss=0.03383, over 3592347.25 frames. ], batch size: 52, lr: 4.31e-03, grad_scale: 8.0
2023-03-09 20:18:43,321 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.656e+02 2.468e+02 2.905e+02 3.361e+02 5.906e+02, threshold=5.811e+02, percent-clipped=1.0
2023-03-09 20:19:38,040 INFO [train.py:898] (3/4) Epoch 26, batch 2400, loss[loss=0.1289, simple_loss=0.216, pruned_loss=0.02091, over 18584.00 frames. ], tot_loss[loss=0.1586, simple_loss=0.2493, pruned_loss=0.03398, over 3586470.73 frames. ], batch size: 45, lr: 4.31e-03, grad_scale: 8.0
2023-03-09 20:19:42,693 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93255.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:19:54,527 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.42 vs. limit=5.0
2023-03-09 20:19:55,311 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6487, 2.3449, 2.5725, 2.6278, 3.2309, 4.7612, 4.5838, 3.2133], device='cuda:3'), covar=tensor([0.2086, 0.2744, 0.3117, 0.2070, 0.2558, 0.0295, 0.0443, 0.1174], device='cuda:3'), in_proj_covar=tensor([0.0324, 0.0361, 0.0404, 0.0287, 0.0395, 0.0262, 0.0301, 0.0270], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 20:20:11,792 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93281.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:20:19,138 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93287.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:20:34,712 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.99 vs. limit=5.0
2023-03-09 20:20:36,267 INFO [train.py:898] (3/4) Epoch 26, batch 2450, loss[loss=0.1556, simple_loss=0.2497, pruned_loss=0.03072, over 18303.00 frames. ], tot_loss[loss=0.1587, simple_loss=0.2496, pruned_loss=0.03386, over 3591230.71 frames. ], batch size: 57, lr: 4.31e-03, grad_scale: 8.0
2023-03-09 20:20:39,773 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.978e+02 2.528e+02 2.957e+02 3.512e+02 5.428e+02, threshold=5.913e+02, percent-clipped=0.0
2023-03-09 20:20:54,155 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.14 vs. limit=5.0
2023-03-09 20:21:08,368 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=93329.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:21:10,680 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93331.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:21:11,893 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7570, 4.7311, 4.4262, 4.6743, 4.7061, 4.1444, 4.6074, 4.4304], device='cuda:3'), covar=tensor([0.0503, 0.0547, 0.1510, 0.0772, 0.0551, 0.0464, 0.0523, 0.1062], device='cuda:3'), in_proj_covar=tensor([0.0513, 0.0578, 0.0724, 0.0450, 0.0473, 0.0523, 0.0561, 0.0695], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 20:21:33,902 INFO [train.py:898] (3/4) Epoch 26, batch 2500, loss[loss=0.141, simple_loss=0.2317, pruned_loss=0.02517, over 18239.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2494, pruned_loss=0.03353, over 3600178.94 frames. ], batch size: 45, lr: 4.31e-03, grad_scale: 8.0
2023-03-09 20:21:52,406 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93366.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:22:00,237 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2219, 4.1671, 3.9772, 4.1520, 4.1948, 3.7069, 4.1301, 4.0019], device='cuda:3'), covar=tensor([0.0567, 0.0785, 0.1382, 0.0787, 0.0629, 0.0499, 0.0553, 0.1057], device='cuda:3'), in_proj_covar=tensor([0.0514, 0.0579, 0.0724, 0.0451, 0.0473, 0.0525, 0.0563, 0.0696], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 20:22:03,723 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93376.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:22:22,164 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93392.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 20:22:32,982 INFO [train.py:898] (3/4) Epoch 26, batch 2550, loss[loss=0.1409, simple_loss=0.2243, pruned_loss=0.02876, over 18555.00 frames. ], tot_loss[loss=0.1584, simple_loss=0.2494, pruned_loss=0.03374, over 3566994.99 frames. ], batch size: 45, lr: 4.30e-03, grad_scale: 8.0
2023-03-09 20:22:36,879 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.774e+02 2.484e+02 3.027e+02 3.757e+02 6.388e+02, threshold=6.054e+02, percent-clipped=1.0
2023-03-09 20:23:00,435 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=93424.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:23:31,989 INFO [train.py:898] (3/4) Epoch 26, batch 2600, loss[loss=0.1563, simple_loss=0.2421, pruned_loss=0.03522, over 18544.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2494, pruned_loss=0.03376, over 3570980.77 frames. ], batch size: 49, lr: 4.30e-03, grad_scale: 8.0
2023-03-09 20:23:42,768 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9313, 3.6866, 5.0276, 2.7959, 4.4598, 2.5789, 2.9521, 1.8525], device='cuda:3'), covar=tensor([0.1166, 0.0969, 0.0187, 0.1032, 0.0448, 0.2649, 0.2756, 0.2226], device='cuda:3'), in_proj_covar=tensor([0.0226, 0.0248, 0.0218, 0.0204, 0.0263, 0.0274, 0.0333, 0.0241], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 20:23:55,940 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93471.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:23:57,599 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-09 20:24:14,466 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93487.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:24:30,106 INFO [train.py:898] (3/4) Epoch 26, batch 2650, loss[loss=0.165, simple_loss=0.2577, pruned_loss=0.03618, over 17779.00 frames. ], tot_loss[loss=0.158, simple_loss=0.2487, pruned_loss=0.03362, over 3566985.14 frames. ], batch size: 70, lr: 4.30e-03, grad_scale: 8.0
2023-03-09 20:24:34,072 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.541e+02 2.517e+02 2.981e+02 3.680e+02 1.070e+03, threshold=5.961e+02, percent-clipped=2.0
2023-03-09 20:25:09,834 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=93535.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:25:27,976 INFO [train.py:898] (3/4) Epoch 26, batch 2700, loss[loss=0.1598, simple_loss=0.2554, pruned_loss=0.03211, over 17269.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2487, pruned_loss=0.03354, over 3579018.11 frames. ], batch size: 78, lr: 4.30e-03, grad_scale: 16.0
2023-03-09 20:25:32,742 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93555.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:25:38,203 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9406, 3.7097, 4.9990, 4.4294, 3.3657, 3.0998, 4.3826, 5.2377], device='cuda:3'), covar=tensor([0.0764, 0.1438, 0.0193, 0.0406, 0.0958, 0.1172, 0.0427, 0.0289], device='cuda:3'), in_proj_covar=tensor([0.0152, 0.0279, 0.0167, 0.0184, 0.0193, 0.0195, 0.0198, 0.0208], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 20:26:09,151 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2377, 5.3775, 5.5966, 5.7288, 5.2790, 6.1816, 5.8773, 5.5141], device='cuda:3'), covar=tensor([0.1098, 0.0614, 0.0779, 0.0712, 0.1277, 0.0716, 0.0659, 0.1529], device='cuda:3'), in_proj_covar=tensor([0.0369, 0.0304, 0.0326, 0.0330, 0.0343, 0.0442, 0.0295, 0.0431], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 20:26:10,305 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93587.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:26:19,251 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.40 vs. limit=5.0
2023-03-09 20:26:26,599 INFO [train.py:898] (3/4) Epoch 26, batch 2750, loss[loss=0.1524, simple_loss=0.251, pruned_loss=0.02685, over 18361.00 frames. ], tot_loss[loss=0.1576, simple_loss=0.2483, pruned_loss=0.03346, over 3594061.18 frames. ], batch size: 55, lr: 4.30e-03, grad_scale: 16.0
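Note on the grad_scale field in the loss records: it is the dynamic loss scale from mixed-precision training. In this stretch of the log it halves on an overflow (8.0 to 4.0 around batch 700), recovers to 8.0 by batch 800, and doubles to 16.0 by batch 2700 after a long overflow-free run. A hedged sketch of that mechanism using PyTorch's GradScaler; the model/optimizer/loss objects are placeholders, not icefall's exact training step:

    import torch

    # backoff_factor/growth_factor give the halving/doubling seen in the log;
    # growth_interval is an assumption about how long "stable" must last.
    scaler = torch.cuda.amp.GradScaler(init_scale=8.0, growth_factor=2.0,
                                       backoff_factor=0.5, growth_interval=2000)

    def training_step(model, optimizer, features, targets, loss_fn):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = loss_fn(model(features), targets)
        scaler.scale(loss).backward()   # backward on the scaled loss
        scaler.step(optimizer)          # unscales; skips the step on inf/nan
        scaler.update()                 # adjusts grad_scale for the next batch
        return loss.detach(), scaler.get_scale()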
2023-03-09 20:26:29,047 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=93603.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:26:29,919 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.721e+02 2.428e+02 2.838e+02 3.417e+02 1.067e+03, threshold=5.676e+02, percent-clipped=2.0
2023-03-09 20:26:58,973 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6169, 6.0686, 5.6742, 5.9552, 5.7873, 5.5915, 6.2053, 6.1269], device='cuda:3'), covar=tensor([0.1156, 0.0842, 0.0436, 0.0682, 0.1334, 0.0712, 0.0586, 0.0721], device='cuda:3'), in_proj_covar=tensor([0.0628, 0.0554, 0.0398, 0.0574, 0.0773, 0.0571, 0.0785, 0.0598], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 20:27:02,426 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93631.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:27:06,787 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=93635.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:27:25,262 INFO [train.py:898] (3/4) Epoch 26, batch 2800, loss[loss=0.1376, simple_loss=0.2234, pruned_loss=0.02589, over 18171.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2481, pruned_loss=0.03323, over 3583427.78 frames. ], batch size: 44, lr: 4.30e-03, grad_scale: 16.0
2023-03-09 20:27:43,730 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93666.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:27:58,607 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=93679.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:28:13,477 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93692.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:28:23,641 INFO [train.py:898] (3/4) Epoch 26, batch 2850, loss[loss=0.1387, simple_loss=0.2305, pruned_loss=0.02342, over 18269.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2482, pruned_loss=0.03318, over 3588686.99 frames. ], batch size: 47, lr: 4.30e-03, grad_scale: 16.0
2023-03-09 20:28:27,051 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.715e+02 2.471e+02 3.010e+02 3.579e+02 6.367e+02, threshold=6.020e+02, percent-clipped=3.0
2023-03-09 20:28:39,127 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=93714.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:28:54,452 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0
2023-03-09 20:29:00,508 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8467, 4.6075, 4.6560, 3.5239, 3.8918, 3.6601, 2.8450, 2.6136], device='cuda:3'), covar=tensor([0.0238, 0.0143, 0.0080, 0.0303, 0.0344, 0.0222, 0.0677, 0.0840], device='cuda:3'), in_proj_covar=tensor([0.0074, 0.0064, 0.0068, 0.0071, 0.0092, 0.0070, 0.0079, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 20:29:07,610 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-09 20:29:09,323 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=93740.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:29:21,126 INFO [train.py:898] (3/4) Epoch 26, batch 2900, loss[loss=0.1376, simple_loss=0.2224, pruned_loss=0.02645, over 18507.00 frames. ], tot_loss[loss=0.1576, simple_loss=0.2485, pruned_loss=0.03328, over 3589586.13 frames. ], batch size: 47, lr: 4.30e-03, grad_scale: 16.0
2023-03-09 20:29:21,951 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1150, 5.2415, 5.3854, 5.4850, 5.0949, 5.9923, 5.6066, 5.2980], device='cuda:3'), covar=tensor([0.1154, 0.0662, 0.0810, 0.0959, 0.1416, 0.0733, 0.0733, 0.1710], device='cuda:3'), in_proj_covar=tensor([0.0366, 0.0302, 0.0322, 0.0330, 0.0341, 0.0437, 0.0293, 0.0430], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 20:29:29,437 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.79 vs. limit=5.0
2023-03-09 20:29:44,650 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93770.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:29:45,773 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93771.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:30:01,058 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6409, 3.4457, 2.4483, 4.4044, 3.1804, 4.1865, 2.5987, 3.9873], device='cuda:3'), covar=tensor([0.0597, 0.0802, 0.1252, 0.0398, 0.0757, 0.0275, 0.1096, 0.0376], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0231, 0.0195, 0.0295, 0.0197, 0.0270, 0.0206, 0.0206], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 20:30:19,938 INFO [train.py:898] (3/4) Epoch 26, batch 2950, loss[loss=0.1461, simple_loss=0.2424, pruned_loss=0.02488, over 18368.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2487, pruned_loss=0.03358, over 3577999.36 frames. ], batch size: 55, lr: 4.30e-03, grad_scale: 16.0
2023-03-09 20:30:23,992 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.766e+02 2.413e+02 2.765e+02 3.459e+02 6.770e+02, threshold=5.531e+02, percent-clipped=1.0
2023-03-09 20:30:41,696 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=93819.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:30:56,251 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93831.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:31:14,225 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4201, 4.4485, 4.4977, 4.2590, 4.3374, 4.2989, 4.5366, 4.5391], device='cuda:3'), covar=tensor([0.0084, 0.0073, 0.0069, 0.0115, 0.0068, 0.0152, 0.0073, 0.0090], device='cuda:3'), in_proj_covar=tensor([0.0099, 0.0074, 0.0079, 0.0099, 0.0078, 0.0108, 0.0091, 0.0090], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 20:31:18,494 INFO [train.py:898] (3/4) Epoch 26, batch 3000, loss[loss=0.1951, simple_loss=0.2745, pruned_loss=0.05779, over 12947.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2488, pruned_loss=0.03356, over 3568661.65 frames. ], batch size: 130, lr: 4.29e-03, grad_scale: 16.0
2023-03-09 20:31:18,494 INFO [train.py:923] (3/4) Computing validation loss
2023-03-09 20:31:30,353 INFO [train.py:932] (3/4) Epoch 26, validation: loss=0.15, simple_loss=0.2481, pruned_loss=0.02599, over 944034.00 frames.
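Note on the validation records: at fixed points (batch 3000 here, and again at batch 0 of epoch 27 below) train.py runs the same loss computation over the dev set without gradients, then logs the frame-weighted result and peak GPU memory. A hedged sketch of that pass; `compute_loss` is a placeholder for the recipe's loss function, not icefall's exact signature:

    import torch

    def compute_validation_loss(model, valid_loader, compute_loss, device):
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in valid_loader:
                loss, num_frames = compute_loss(model, batch, device)
                tot_loss += loss.item()
                tot_frames += num_frames
        model.train()
        return tot_loss / max(tot_frames, 1.0)  # frame-normalized dev loss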
2023-03-09 20:31:30,354 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-09 20:32:13,320 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6344, 3.1205, 4.3766, 3.6950, 2.9208, 4.5512, 3.9915, 2.8033], device='cuda:3'), covar=tensor([0.0532, 0.1219, 0.0269, 0.0426, 0.1386, 0.0275, 0.0496, 0.1028], device='cuda:3'), in_proj_covar=tensor([0.0220, 0.0244, 0.0229, 0.0172, 0.0226, 0.0220, 0.0256, 0.0202], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 20:32:28,919 INFO [train.py:898] (3/4) Epoch 26, batch 3050, loss[loss=0.159, simple_loss=0.2542, pruned_loss=0.03185, over 18551.00 frames. ], tot_loss[loss=0.1577, simple_loss=0.2482, pruned_loss=0.03353, over 3574937.50 frames. ], batch size: 54, lr: 4.29e-03, grad_scale: 16.0
2023-03-09 20:32:31,517 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7030, 4.3762, 4.3643, 3.3745, 3.7270, 3.3829, 2.5645, 2.5074], device='cuda:3'), covar=tensor([0.0260, 0.0155, 0.0095, 0.0331, 0.0373, 0.0256, 0.0749, 0.0865], device='cuda:3'), in_proj_covar=tensor([0.0074, 0.0063, 0.0068, 0.0071, 0.0091, 0.0070, 0.0078, 0.0085], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 20:32:32,225 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.657e+02 2.555e+02 3.073e+02 3.527e+02 8.787e+02, threshold=6.146e+02, percent-clipped=2.0
2023-03-09 20:33:27,973 INFO [train.py:898] (3/4) Epoch 26, batch 3100, loss[loss=0.1453, simple_loss=0.2315, pruned_loss=0.02958, over 18334.00 frames. ], tot_loss[loss=0.157, simple_loss=0.2477, pruned_loss=0.03317, over 3571616.02 frames. ], batch size: 46, lr: 4.29e-03, grad_scale: 16.0
2023-03-09 20:34:03,831 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93981.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:34:31,193 INFO [train.py:898] (3/4) Epoch 26, batch 3150, loss[loss=0.1423, simple_loss=0.2256, pruned_loss=0.02948, over 18490.00 frames. ], tot_loss[loss=0.1571, simple_loss=0.2476, pruned_loss=0.03329, over 3566217.29 frames. ], batch size: 44, lr: 4.29e-03, grad_scale: 16.0
2023-03-09 20:34:34,551 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.522e+02 2.445e+02 2.866e+02 3.470e+02 6.892e+02, threshold=5.732e+02, percent-clipped=1.0
2023-03-09 20:34:55,180 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8461, 3.7716, 3.6512, 3.2924, 3.6397, 2.9980, 3.0109, 3.9160], device='cuda:3'), covar=tensor([0.0065, 0.0102, 0.0078, 0.0125, 0.0086, 0.0191, 0.0206, 0.0051], device='cuda:3'), in_proj_covar=tensor([0.0151, 0.0171, 0.0143, 0.0192, 0.0151, 0.0185, 0.0189, 0.0130], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 20:35:19,237 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=94042.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:35:29,133 INFO [train.py:898] (3/4) Epoch 26, batch 3200, loss[loss=0.1607, simple_loss=0.253, pruned_loss=0.03422, over 18572.00 frames. ], tot_loss[loss=0.1571, simple_loss=0.2476, pruned_loss=0.03327, over 3575854.71 frames. ], batch size: 54, lr: 4.29e-03, grad_scale: 16.0
2023-03-09 20:36:27,601 INFO [train.py:898] (3/4) Epoch 26, batch 3250, loss[loss=0.1575, simple_loss=0.2504, pruned_loss=0.03226, over 18339.00 frames. ], tot_loss[loss=0.1567, simple_loss=0.2474, pruned_loss=0.03306, over 3577565.51 frames. ], batch size: 56, lr: 4.29e-03, grad_scale: 16.0
2023-03-09 20:36:31,044 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.794e+02 2.553e+02 3.024e+02 3.724e+02 8.014e+02, threshold=6.047e+02, percent-clipped=2.0
2023-03-09 20:36:50,725 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0
2023-03-09 20:36:57,630 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94126.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:37:26,333 INFO [train.py:898] (3/4) Epoch 26, batch 3300, loss[loss=0.1322, simple_loss=0.2195, pruned_loss=0.02243, over 18512.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.2468, pruned_loss=0.03283, over 3591770.03 frames. ], batch size: 44, lr: 4.29e-03, grad_scale: 16.0
2023-03-09 20:38:24,388 INFO [train.py:898] (3/4) Epoch 26, batch 3350, loss[loss=0.1421, simple_loss=0.2361, pruned_loss=0.02401, over 18280.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2474, pruned_loss=0.03285, over 3596162.54 frames. ], batch size: 49, lr: 4.29e-03, grad_scale: 8.0
2023-03-09 20:38:28,983 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.721e+02 2.569e+02 3.071e+02 3.653e+02 6.732e+02, threshold=6.142e+02, percent-clipped=1.0
2023-03-09 20:38:50,855 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=94223.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 20:39:20,245 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7787, 4.4182, 4.4276, 3.2932, 3.7028, 3.3775, 2.6056, 2.4850], device='cuda:3'), covar=tensor([0.0228, 0.0150, 0.0079, 0.0361, 0.0356, 0.0261, 0.0732, 0.0858], device='cuda:3'), in_proj_covar=tensor([0.0074, 0.0064, 0.0068, 0.0072, 0.0092, 0.0071, 0.0079, 0.0086], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-09 20:39:23,053 INFO [train.py:898] (3/4) Epoch 26, batch 3400, loss[loss=0.1354, simple_loss=0.2164, pruned_loss=0.02725, over 18489.00 frames. ], tot_loss[loss=0.1563, simple_loss=0.2469, pruned_loss=0.03286, over 3590918.06 frames. ], batch size: 44, lr: 4.29e-03, grad_scale: 8.0
2023-03-09 20:39:35,961 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0726, 5.5830, 5.1966, 5.3593, 5.2567, 5.0133, 5.6392, 5.5923], device='cuda:3'), covar=tensor([0.1154, 0.0772, 0.0666, 0.0739, 0.1396, 0.0754, 0.0621, 0.0698], device='cuda:3'), in_proj_covar=tensor([0.0628, 0.0556, 0.0401, 0.0578, 0.0776, 0.0573, 0.0790, 0.0599], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 20:40:02,568 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=94284.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 20:40:06,772 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8344, 5.3213, 4.9503, 5.1517, 5.0278, 4.8393, 5.4131, 5.3358], device='cuda:3'), covar=tensor([0.1267, 0.0850, 0.1031, 0.0781, 0.1345, 0.0760, 0.0652, 0.0784], device='cuda:3'), in_proj_covar=tensor([0.0632, 0.0561, 0.0404, 0.0582, 0.0780, 0.0577, 0.0794, 0.0603], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 20:40:14,220 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0663, 5.5004, 5.5303, 5.4862, 5.0071, 5.4359, 4.8905, 5.4187], device='cuda:3'), covar=tensor([0.0244, 0.0248, 0.0154, 0.0370, 0.0405, 0.0193, 0.0982, 0.0262], device='cuda:3'), in_proj_covar=tensor([0.0233, 0.0274, 0.0275, 0.0352, 0.0284, 0.0284, 0.0319, 0.0278], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 20:40:21,758 INFO [train.py:898] (3/4) Epoch 26, batch 3450, loss[loss=0.1561, simple_loss=0.2519, pruned_loss=0.03018, over 16241.00 frames. ], tot_loss[loss=0.1568, simple_loss=0.2475, pruned_loss=0.03308, over 3597540.70 frames. ], batch size: 95, lr: 4.28e-03, grad_scale: 8.0
2023-03-09 20:40:22,122 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9621, 4.9459, 5.0605, 4.6478, 4.7827, 4.7810, 5.0248, 5.0676], device='cuda:3'), covar=tensor([0.0086, 0.0093, 0.0074, 0.0146, 0.0079, 0.0185, 0.0102, 0.0113], device='cuda:3'), in_proj_covar=tensor([0.0099, 0.0074, 0.0079, 0.0099, 0.0078, 0.0108, 0.0091, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 20:40:26,281 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.946e+02 2.477e+02 2.921e+02 3.603e+02 6.240e+02, threshold=5.842e+02, percent-clipped=1.0
2023-03-09 20:41:04,308 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94337.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:41:17,087 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=94348.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:41:20,138 INFO [train.py:898] (3/4) Epoch 26, batch 3500, loss[loss=0.2017, simple_loss=0.2806, pruned_loss=0.06143, over 12707.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2469, pruned_loss=0.03302, over 3596625.14 frames. ], batch size: 130, lr: 4.28e-03, grad_scale: 8.0
], batch size: 130, lr: 4.28e-03, grad_scale: 8.0 2023-03-09 20:41:54,364 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3518, 2.7188, 2.4770, 2.6789, 3.4694, 3.3363, 3.0108, 2.7358], device='cuda:3'), covar=tensor([0.0186, 0.0279, 0.0576, 0.0407, 0.0202, 0.0190, 0.0369, 0.0440], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0145, 0.0168, 0.0165, 0.0142, 0.0127, 0.0162, 0.0165], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 20:42:15,620 INFO [train.py:898] (3/4) Epoch 26, batch 3550, loss[loss=0.1683, simple_loss=0.262, pruned_loss=0.03733, over 18247.00 frames. ], tot_loss[loss=0.1568, simple_loss=0.2475, pruned_loss=0.03311, over 3594345.80 frames. ], batch size: 60, lr: 4.28e-03, grad_scale: 8.0 2023-03-09 20:42:20,517 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.890e+02 2.549e+02 2.986e+02 3.546e+02 5.816e+02, threshold=5.972e+02, percent-clipped=0.0 2023-03-09 20:42:25,185 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=94409.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 20:42:37,973 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9375, 3.7060, 5.1889, 2.9989, 4.5263, 2.6501, 3.1163, 1.8322], device='cuda:3'), covar=tensor([0.1209, 0.0977, 0.0148, 0.1010, 0.0490, 0.2708, 0.2730, 0.2333], device='cuda:3'), in_proj_covar=tensor([0.0228, 0.0251, 0.0223, 0.0207, 0.0265, 0.0278, 0.0335, 0.0244], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 20:42:43,022 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=94426.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 20:42:57,028 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7268, 5.2238, 5.2254, 5.3216, 4.6620, 5.1365, 4.4886, 5.1249], device='cuda:3'), covar=tensor([0.0305, 0.0387, 0.0259, 0.0464, 0.0470, 0.0292, 0.1374, 0.0359], device='cuda:3'), in_proj_covar=tensor([0.0234, 0.0276, 0.0276, 0.0353, 0.0286, 0.0286, 0.0320, 0.0278], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 20:43:09,310 INFO [train.py:898] (3/4) Epoch 26, batch 3600, loss[loss=0.1586, simple_loss=0.2488, pruned_loss=0.03423, over 17016.00 frames. ], tot_loss[loss=0.1571, simple_loss=0.248, pruned_loss=0.03311, over 3602856.06 frames. ], batch size: 78, lr: 4.28e-03, grad_scale: 8.0 2023-03-09 20:43:34,267 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=94474.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 20:44:11,700 INFO [train.py:898] (3/4) Epoch 27, batch 0, loss[loss=0.151, simple_loss=0.2451, pruned_loss=0.02846, over 18253.00 frames. ], tot_loss[loss=0.151, simple_loss=0.2451, pruned_loss=0.02846, over 18253.00 frames. ], batch size: 47, lr: 4.20e-03, grad_scale: 8.0 2023-03-09 20:44:11,700 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 20:44:23,641 INFO [train.py:932] (3/4) Epoch 27, validation: loss=0.1494, simple_loss=0.2481, pruned_loss=0.02532, over 944034.00 frames. 
2023-03-09 20:44:23,642 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 20:44:49,667 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.900e+02 2.525e+02 3.009e+02 3.747e+02 9.938e+02, threshold=6.019e+02, percent-clipped=2.0 2023-03-09 20:45:22,318 INFO [train.py:898] (3/4) Epoch 27, batch 50, loss[loss=0.1467, simple_loss=0.2376, pruned_loss=0.02788, over 18536.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2475, pruned_loss=0.03272, over 806012.41 frames. ], batch size: 49, lr: 4.20e-03, grad_scale: 8.0 2023-03-09 20:45:30,708 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.65 vs. limit=2.0 2023-03-09 20:45:47,012 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 20:46:13,896 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94579.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 20:46:20,481 INFO [train.py:898] (3/4) Epoch 27, batch 100, loss[loss=0.1287, simple_loss=0.2128, pruned_loss=0.02232, over 18503.00 frames. ], tot_loss[loss=0.1552, simple_loss=0.246, pruned_loss=0.03218, over 1420221.27 frames. ], batch size: 44, lr: 4.20e-03, grad_scale: 8.0 2023-03-09 20:46:25,573 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9311, 3.7255, 5.1272, 4.5171, 3.5794, 3.1073, 4.6413, 5.3707], device='cuda:3'), covar=tensor([0.0829, 0.1686, 0.0218, 0.0401, 0.0898, 0.1235, 0.0361, 0.0278], device='cuda:3'), in_proj_covar=tensor([0.0156, 0.0286, 0.0170, 0.0188, 0.0198, 0.0199, 0.0202, 0.0213], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 20:46:30,394 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-09 20:46:46,510 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.709e+02 2.657e+02 3.166e+02 3.640e+02 6.599e+02, threshold=6.333e+02, percent-clipped=4.0 2023-03-09 20:47:19,768 INFO [train.py:898] (3/4) Epoch 27, batch 150, loss[loss=0.1514, simple_loss=0.2408, pruned_loss=0.03106, over 18233.00 frames. ], tot_loss[loss=0.1567, simple_loss=0.2476, pruned_loss=0.03287, over 1912182.77 frames. ], batch size: 60, lr: 4.19e-03, grad_scale: 8.0 2023-03-09 20:47:22,333 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=94637.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 20:47:22,841 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 20:47:31,201 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4192, 3.2584, 2.1606, 4.1434, 2.9344, 3.8287, 2.3914, 3.7172], device='cuda:3'), covar=tensor([0.0646, 0.0901, 0.1438, 0.0510, 0.0837, 0.0317, 0.1269, 0.0386], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0232, 0.0195, 0.0296, 0.0197, 0.0270, 0.0206, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 20:47:32,552 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.58 vs. limit=2.0 2023-03-09 20:47:36,331 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. 
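The scaling.py:679 lines above compare a measured whitening metric against a fixed limit (2.0 for the grouped 96- and 192-channel checks, 5.0 for the single-group 384-channel check). One plausible reading of that metric, sketched below, is an eigenvalue-spread statistic of the activation covariance: it equals 1.0 for perfectly whitened features and grows as the covariance drifts from a multiple of the identity. This is an illustrative reconstruction, not the exact code behind scaling.py:679, and the contiguous channel grouping is also an assumption.

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # x: (num_frames, num_channels).  Split channels into groups and measure
    # how far each group's covariance is from a multiple of the identity.
    # For covariance eigenvalues l_i, the ratio mean(l_i^2) / mean(l_i)^2
    # is 1.0 for white features and grows with eigenvalue spread, matching
    # the "metric=... vs. limit=..." pattern in the log.
    num_frames, num_channels = x.shape
    assert num_channels % num_groups == 0
    x = x.reshape(num_frames, num_groups, num_channels // num_groups).transpose(0, 1)
    x = x - x.mean(dim=1, keepdim=True)
    cov = torch.matmul(x.transpose(1, 2), x) / num_frames  # (groups, c, c)
    eigs = torch.linalg.eigvalsh(cov)
    return (eigs ** 2).mean() / (eigs.mean() ** 2)

In this stretch of the run the check is passive almost everywhere: metrics such as 1.25 or 1.94 sit under the limit of 2.0, and only rare entries (for example 2.01 against limit 2.0 later in this log) exceed it.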
2023-03-09 20:47:37,027 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4240, 5.9679, 5.5638, 5.7508, 5.5362, 5.3390, 6.0134, 5.9612], device='cuda:3'), covar=tensor([0.1327, 0.0796, 0.0478, 0.0710, 0.1409, 0.0755, 0.0633, 0.0709], device='cuda:3'), in_proj_covar=tensor([0.0626, 0.0555, 0.0401, 0.0574, 0.0773, 0.0571, 0.0787, 0.0596], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 20:47:50,546 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3330, 5.3347, 4.9765, 5.2296, 5.2633, 4.6849, 5.1373, 4.9441], device='cuda:3'), covar=tensor([0.0464, 0.0429, 0.1383, 0.0906, 0.0639, 0.0453, 0.0479, 0.1121], device='cuda:3'), in_proj_covar=tensor([0.0510, 0.0576, 0.0717, 0.0447, 0.0472, 0.0523, 0.0558, 0.0690], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 20:47:55,889 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6065, 5.0847, 5.0857, 5.1182, 4.5868, 5.0092, 4.5052, 4.9941], device='cuda:3'), covar=tensor([0.0290, 0.0336, 0.0223, 0.0523, 0.0439, 0.0257, 0.1097, 0.0342], device='cuda:3'), in_proj_covar=tensor([0.0236, 0.0278, 0.0279, 0.0355, 0.0287, 0.0288, 0.0320, 0.0280], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 20:48:17,963 INFO [train.py:898] (3/4) Epoch 27, batch 200, loss[loss=0.181, simple_loss=0.2654, pruned_loss=0.04826, over 18287.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.249, pruned_loss=0.03342, over 2278608.79 frames. ], batch size: 57, lr: 4.19e-03, grad_scale: 8.0
2023-03-09 20:48:18,121 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=94685.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:48:18,236 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2152, 5.1966, 4.8242, 5.1359, 5.1045, 4.5643, 5.0122, 4.7796], device='cuda:3'), covar=tensor([0.0422, 0.0427, 0.1383, 0.0737, 0.0661, 0.0418, 0.0470, 0.1109], device='cuda:3'), in_proj_covar=tensor([0.0508, 0.0574, 0.0715, 0.0446, 0.0471, 0.0522, 0.0557, 0.0688], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 20:48:39,868 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94704.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 20:48:41,886 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.913e+02 2.647e+02 3.071e+02 3.764e+02 1.112e+03, threshold=6.143e+02, percent-clipped=3.0
2023-03-09 20:48:48,869 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7451, 4.8218, 4.8708, 4.5560, 4.5636, 4.6537, 4.9332, 4.9168], device='cuda:3'), covar=tensor([0.0079, 0.0061, 0.0062, 0.0121, 0.0073, 0.0146, 0.0074, 0.0091], device='cuda:3'), in_proj_covar=tensor([0.0099, 0.0074, 0.0079, 0.0099, 0.0079, 0.0108, 0.0091, 0.0090], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 20:49:16,149 INFO [train.py:898] (3/4) Epoch 27, batch 250, loss[loss=0.1333, simple_loss=0.2157, pruned_loss=0.0254, over 18397.00 frames. ], tot_loss[loss=0.1569, simple_loss=0.2478, pruned_loss=0.03302, over 2576551.66 frames. ], batch size: 42, lr: 4.19e-03, grad_scale: 8.0
2023-03-09 20:49:20,977 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9204, 5.4287, 2.9472, 5.3597, 5.1932, 5.4663, 5.3039, 2.8053], device='cuda:3'), covar=tensor([0.0233, 0.0117, 0.0734, 0.0071, 0.0072, 0.0094, 0.0093, 0.0936], device='cuda:3'), in_proj_covar=tensor([0.0092, 0.0083, 0.0098, 0.0098, 0.0089, 0.0079, 0.0086, 0.0097], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 20:49:35,861 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7867, 3.1004, 2.7625, 2.9763, 3.8362, 3.7754, 3.3162, 3.0649], device='cuda:3'), covar=tensor([0.0170, 0.0276, 0.0533, 0.0389, 0.0170, 0.0144, 0.0381, 0.0389], device='cuda:3'), in_proj_covar=tensor([0.0148, 0.0146, 0.0171, 0.0167, 0.0144, 0.0129, 0.0165, 0.0168], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0003, 0.0004], device='cuda:3')
2023-03-09 20:50:14,304 INFO [train.py:898] (3/4) Epoch 27, batch 300, loss[loss=0.1428, simple_loss=0.2266, pruned_loss=0.02945, over 17716.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2483, pruned_loss=0.03316, over 2806106.48 frames. ], batch size: 39, lr: 4.19e-03, grad_scale: 8.0
2023-03-09 20:50:38,014 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.940e+02 2.501e+02 2.879e+02 3.354e+02 8.790e+02, threshold=5.757e+02, percent-clipped=2.0
2023-03-09 20:51:12,771 INFO [train.py:898] (3/4) Epoch 27, batch 350, loss[loss=0.1478, simple_loss=0.2308, pruned_loss=0.03237, over 18266.00 frames. ], tot_loss[loss=0.1568, simple_loss=0.2475, pruned_loss=0.03309, over 2982323.18 frames. ], batch size: 47, lr: 4.19e-03, grad_scale: 8.0
2023-03-09 20:51:45,962 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6860, 3.0804, 4.4869, 3.7567, 3.0005, 4.7818, 3.9879, 2.9798], device='cuda:3'), covar=tensor([0.0556, 0.1371, 0.0323, 0.0487, 0.1414, 0.0251, 0.0611, 0.0969], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0245, 0.0232, 0.0172, 0.0228, 0.0219, 0.0258, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 20:52:04,507 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=94879.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 20:52:11,487 INFO [train.py:898] (3/4) Epoch 27, batch 400, loss[loss=0.1737, simple_loss=0.2627, pruned_loss=0.04229, over 18335.00 frames. ], tot_loss[loss=0.1568, simple_loss=0.2474, pruned_loss=0.03308, over 3119105.64 frames. ], batch size: 56, lr: 4.19e-03, grad_scale: 8.0
2023-03-09 20:52:35,456 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.622e+02 2.561e+02 2.993e+02 3.675e+02 9.257e+02, threshold=5.986e+02, percent-clipped=2.0
2023-03-09 20:53:00,994 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=94927.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 20:53:05,781 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9809, 3.5862, 5.0380, 4.3376, 3.4111, 3.1180, 4.5049, 5.2748], device='cuda:3'), covar=tensor([0.0780, 0.1674, 0.0218, 0.0435, 0.0956, 0.1180, 0.0377, 0.0245], device='cuda:3'), in_proj_covar=tensor([0.0155, 0.0285, 0.0170, 0.0188, 0.0197, 0.0198, 0.0201, 0.0213], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 20:53:10,431 INFO [train.py:898] (3/4) Epoch 27, batch 450, loss[loss=0.1655, simple_loss=0.2605, pruned_loss=0.03523, over 18138.00 frames. ], tot_loss[loss=0.1568, simple_loss=0.2475, pruned_loss=0.03309, over 3227470.87 frames. ], batch size: 62, lr: 4.19e-03, grad_scale: 8.0
2023-03-09 20:54:08,450 INFO [train.py:898] (3/4) Epoch 27, batch 500, loss[loss=0.1343, simple_loss=0.2187, pruned_loss=0.02494, over 18486.00 frames. ], tot_loss[loss=0.1572, simple_loss=0.248, pruned_loss=0.03322, over 3316681.52 frames. ], batch size: 44, lr: 4.19e-03, grad_scale: 8.0
2023-03-09 20:54:31,125 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95004.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:54:33,124 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.646e+02 2.379e+02 2.893e+02 3.506e+02 5.482e+02, threshold=5.786e+02, percent-clipped=0.0
2023-03-09 20:54:45,913 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5173, 5.4779, 5.1242, 5.4781, 5.3912, 4.8084, 5.3343, 5.0313], device='cuda:3'), covar=tensor([0.0409, 0.0422, 0.1162, 0.0671, 0.0659, 0.0413, 0.0420, 0.1088], device='cuda:3'), in_proj_covar=tensor([0.0509, 0.0580, 0.0720, 0.0449, 0.0473, 0.0527, 0.0562, 0.0696], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 20:55:06,350 INFO [train.py:898] (3/4) Epoch 27, batch 550, loss[loss=0.1596, simple_loss=0.2529, pruned_loss=0.03322, over 18566.00 frames. ], tot_loss[loss=0.1571, simple_loss=0.248, pruned_loss=0.03314, over 3381559.01 frames. ], batch size: 54, lr: 4.19e-03, grad_scale: 8.0
2023-03-09 20:55:27,021 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=95052.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:56:04,296 INFO [train.py:898] (3/4) Epoch 27, batch 600, loss[loss=0.1507, simple_loss=0.2456, pruned_loss=0.02789, over 18478.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2482, pruned_loss=0.03316, over 3417201.64 frames. ], batch size: 53, lr: 4.19e-03, grad_scale: 8.0
2023-03-09 20:56:28,239 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0
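Throughout the train.py:898 entries, each per-batch loss[...] triple satisfies loss = 0.5 * simple_loss + pruned_loss; for instance 0.5 * 0.2474 + 0.03306 = 0.1567 at the top of this excerpt. A minimal sketch of that combination follows. The function name is illustrative, and the ramp that the training script applies to the two scales during the earliest warmup batches is deliberately omitted.

import torch

def combine_transducer_losses(simple_loss: torch.Tensor,
                              pruned_loss: torch.Tensor,
                              simple_loss_scale: float = 0.5) -> torch.Tensor:
    # Post-warmup combination consistent with every loss[...] triple logged
    # above: the reported "loss" equals half the simple (linear-boundary)
    # RNN-T loss plus the full pruned RNN-T loss.
    return simple_loss_scale * simple_loss + pruned_loss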
2023-03-09 20:56:28,803 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95105.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 20:56:29,338 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.684e+02 2.616e+02 3.098e+02 3.741e+02 7.084e+02, threshold=6.196e+02, percent-clipped=4.0
2023-03-09 20:56:37,905 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9465, 3.9287, 4.8922, 4.4890, 3.2183, 3.1477, 4.5849, 5.2978], device='cuda:3'), covar=tensor([0.0737, 0.1414, 0.0332, 0.0357, 0.1026, 0.1109, 0.0358, 0.0239], device='cuda:3'), in_proj_covar=tensor([0.0155, 0.0284, 0.0170, 0.0187, 0.0197, 0.0198, 0.0200, 0.0213], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 20:56:48,459 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-09 20:57:02,895 INFO [train.py:898] (3/4) Epoch 27, batch 650, loss[loss=0.1358, simple_loss=0.2262, pruned_loss=0.02271, over 18489.00 frames. ], tot_loss[loss=0.1575, simple_loss=0.2482, pruned_loss=0.03346, over 3442417.62 frames. ], batch size: 47, lr: 4.18e-03, grad_scale: 8.0
2023-03-09 20:57:07,814 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0
2023-03-09 20:57:40,204 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95166.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 20:58:02,060 INFO [train.py:898] (3/4) Epoch 27, batch 700, loss[loss=0.1946, simple_loss=0.2644, pruned_loss=0.06237, over 12758.00 frames. ], tot_loss[loss=0.1585, simple_loss=0.2491, pruned_loss=0.03391, over 3476411.85 frames. ], batch size: 129, lr: 4.18e-03, grad_scale: 8.0
2023-03-09 20:58:28,147 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.730e+02 2.528e+02 2.858e+02 3.402e+02 6.027e+02, threshold=5.717e+02, percent-clipped=1.0
2023-03-09 20:58:40,635 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1838, 3.9141, 5.3405, 2.9541, 4.7235, 2.6506, 3.1612, 1.9497], device='cuda:3'), covar=tensor([0.1050, 0.0860, 0.0156, 0.1003, 0.0450, 0.2726, 0.2680, 0.2275], device='cuda:3'), in_proj_covar=tensor([0.0231, 0.0253, 0.0227, 0.0209, 0.0268, 0.0282, 0.0338, 0.0247], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 20:58:54,221 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8864, 4.5736, 4.6679, 3.4312, 3.7855, 3.6222, 2.7734, 2.5022], device='cuda:3'), covar=tensor([0.0236, 0.0155, 0.0075, 0.0338, 0.0342, 0.0222, 0.0671, 0.0851], device='cuda:3'), in_proj_covar=tensor([0.0076, 0.0065, 0.0069, 0.0073, 0.0094, 0.0071, 0.0080, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-09 20:59:01,259 INFO [train.py:898] (3/4) Epoch 27, batch 750, loss[loss=0.1869, simple_loss=0.2681, pruned_loss=0.05281, over 12695.00 frames. ], tot_loss[loss=0.1582, simple_loss=0.2488, pruned_loss=0.03382, over 3482345.86 frames. ], batch size: 129, lr: 4.18e-03, grad_scale: 8.0
2023-03-09 20:59:31,752 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95260.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 20:59:40,865 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95268.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:00:00,184 INFO [train.py:898] (3/4) Epoch 27, batch 800, loss[loss=0.1588, simple_loss=0.2518, pruned_loss=0.03291, over 17104.00 frames. ], tot_loss[loss=0.1587, simple_loss=0.2491, pruned_loss=0.03412, over 3489442.11 frames. ], batch size: 78, lr: 4.18e-03, grad_scale: 8.0
2023-03-09 21:00:18,891 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8339, 2.8962, 2.0685, 3.3224, 2.4751, 2.7755, 2.3386, 2.8726], device='cuda:3'), covar=tensor([0.0672, 0.0836, 0.1269, 0.0591, 0.0775, 0.0290, 0.1083, 0.0500], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0231, 0.0194, 0.0293, 0.0196, 0.0269, 0.0205, 0.0205], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 21:00:25,360 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.793e+02 2.511e+02 2.986e+02 3.454e+02 7.557e+02, threshold=5.973e+02, percent-clipped=4.0
2023-03-09 21:00:43,549 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95321.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:00:52,445 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95329.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:00:58,806 INFO [train.py:898] (3/4) Epoch 27, batch 850, loss[loss=0.1582, simple_loss=0.2483, pruned_loss=0.03409, over 18407.00 frames. ], tot_loss[loss=0.1578, simple_loss=0.2481, pruned_loss=0.03372, over 3512842.55 frames. ], batch size: 48, lr: 4.18e-03, grad_scale: 8.0
2023-03-09 21:01:04,260 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95339.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:01:23,799 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0029, 5.6256, 3.3338, 5.4128, 5.3081, 5.6411, 5.4921, 3.0272], device='cuda:3'), covar=tensor([0.0237, 0.0052, 0.0564, 0.0060, 0.0069, 0.0059, 0.0074, 0.0852], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0084, 0.0099, 0.0099, 0.0090, 0.0080, 0.0087, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-09 21:01:54,046 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.75 vs. limit=5.0
2023-03-09 21:01:57,828 INFO [train.py:898] (3/4) Epoch 27, batch 900, loss[loss=0.167, simple_loss=0.2589, pruned_loss=0.03758, over 18456.00 frames. ], tot_loss[loss=0.157, simple_loss=0.2475, pruned_loss=0.0332, over 3538100.14 frames. ], batch size: 59, lr: 4.18e-03, grad_scale: 8.0
2023-03-09 21:02:16,689 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95400.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:02:22,231 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6044, 3.0830, 4.3678, 3.6533, 2.9675, 4.6844, 3.9751, 2.9695], device='cuda:3'), covar=tensor([0.0538, 0.1327, 0.0321, 0.0468, 0.1415, 0.0183, 0.0554, 0.0923], device='cuda:3'), in_proj_covar=tensor([0.0217, 0.0241, 0.0227, 0.0169, 0.0223, 0.0215, 0.0253, 0.0196], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-09 21:02:24,176 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.646e+02 2.410e+02 2.846e+02 3.556e+02 5.953e+02, threshold=5.693e+02, percent-clipped=0.0
2023-03-09 21:02:34,356 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-09 21:02:46,353 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5578, 6.1466, 5.7197, 5.9726, 5.7559, 5.5940, 6.1879, 6.1549], device='cuda:3'), covar=tensor([0.1285, 0.0698, 0.0400, 0.0647, 0.1328, 0.0662, 0.0536, 0.0622], device='cuda:3'), in_proj_covar=tensor([0.0629, 0.0555, 0.0399, 0.0575, 0.0771, 0.0575, 0.0786, 0.0598], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 21:02:57,593 INFO [train.py:898] (3/4) Epoch 27, batch 950, loss[loss=0.1494, simple_loss=0.2342, pruned_loss=0.03225, over 18561.00 frames. ], tot_loss[loss=0.1567, simple_loss=0.2474, pruned_loss=0.03303, over 3557014.67 frames. ], batch size: 45, lr: 4.18e-03, grad_scale: 8.0
2023-03-09 21:03:28,486 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95461.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 21:03:41,820 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6816, 2.2907, 2.5703, 2.6221, 2.9003, 4.4792, 4.4598, 3.2830], device='cuda:3'), covar=tensor([0.1932, 0.2607, 0.2950, 0.2043, 0.2700, 0.0319, 0.0423, 0.1021], device='cuda:3'), in_proj_covar=tensor([0.0328, 0.0363, 0.0408, 0.0291, 0.0398, 0.0265, 0.0305, 0.0271], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3')
2023-03-09 21:03:55,817 INFO [train.py:898] (3/4) Epoch 27, batch 1000, loss[loss=0.1526, simple_loss=0.2501, pruned_loss=0.02753, over 17942.00 frames. ], tot_loss[loss=0.1563, simple_loss=0.2469, pruned_loss=0.03289, over 3561007.83 frames. ], batch size: 65, lr: 4.18e-03, grad_scale: 8.0
2023-03-09 21:04:09,241 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0
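The zipformer.py:625 entries record stochastic layer skipping: each encoder stack logs its warmup window, the global batch_count, and which of its layers will be dropped on this step. With batch_count near 9.5e4, far past every warmup_end, num_to_drop is usually 0 with occasional single drops, which suggests a drop probability that decays to a small floor after warmup. The sketch below is only that assumption made concrete; initial_p and final_p are invented names and values, not figures from zipformer.py.

import random

def pick_layers_to_drop(batch_count: float,
                        num_layers: int,
                        warmup_begin: float,
                        warmup_end: float,
                        initial_p: float = 0.5,
                        final_p: float = 0.05) -> set:
    # Hypothetical schedule: drop aggressively inside the warmup window,
    # then keep a small residual probability, so that late in training most
    # steps log num_to_drop=0 and a few log a single dropped layer.
    if batch_count < warmup_begin:
        p = initial_p
    elif batch_count < warmup_end:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = initial_p + frac * (final_p - initial_p)
    else:
        p = final_p
    return {i for i in range(num_layers) if random.random() < p}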
2023-03-09 21:04:14,375 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3675, 5.9249, 5.5016, 5.7175, 5.5050, 5.3227, 5.9353, 5.9494], device='cuda:3'), covar=tensor([0.1193, 0.0745, 0.0566, 0.0674, 0.1377, 0.0717, 0.0564, 0.0646], device='cuda:3'), in_proj_covar=tensor([0.0631, 0.0557, 0.0402, 0.0578, 0.0775, 0.0579, 0.0788, 0.0598], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 21:04:18,916 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5979, 3.3989, 2.2387, 4.3617, 3.0722, 4.0242, 2.5806, 3.8251], device='cuda:3'), covar=tensor([0.0578, 0.0782, 0.1424, 0.0405, 0.0804, 0.0316, 0.1103, 0.0442], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0233, 0.0196, 0.0297, 0.0198, 0.0272, 0.0208, 0.0208], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 21:04:19,642 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.888e+02 2.625e+02 2.944e+02 3.804e+02 7.534e+02, threshold=5.888e+02, percent-clipped=3.0
2023-03-09 21:04:53,993 INFO [train.py:898] (3/4) Epoch 27, batch 1050, loss[loss=0.1355, simple_loss=0.2207, pruned_loss=0.02517, over 18404.00 frames. ], tot_loss[loss=0.1568, simple_loss=0.2476, pruned_loss=0.03298, over 3564150.70 frames. ], batch size: 42, lr: 4.18e-03, grad_scale: 8.0
2023-03-09 21:05:53,052 INFO [train.py:898] (3/4) Epoch 27, batch 1100, loss[loss=0.1592, simple_loss=0.2441, pruned_loss=0.03718, over 18537.00 frames. ], tot_loss[loss=0.1572, simple_loss=0.248, pruned_loss=0.0332, over 3563171.15 frames. ], batch size: 49, lr: 4.17e-03, grad_scale: 8.0
2023-03-09 21:06:17,708 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.642e+02 2.398e+02 2.884e+02 3.392e+02 9.491e+02, threshold=5.768e+02, percent-clipped=3.0
2023-03-09 21:06:30,360 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95616.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:06:31,574 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95617.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:06:39,707 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95624.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:06:52,364 INFO [train.py:898] (3/4) Epoch 27, batch 1150, loss[loss=0.1638, simple_loss=0.2569, pruned_loss=0.03532, over 18194.00 frames. ], tot_loss[loss=0.1569, simple_loss=0.2479, pruned_loss=0.03298, over 3569869.60 frames. ], batch size: 60, lr: 4.17e-03, grad_scale: 4.0
2023-03-09 21:07:43,481 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95678.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:07:51,500 INFO [train.py:898] (3/4) Epoch 27, batch 1200, loss[loss=0.1705, simple_loss=0.2708, pruned_loss=0.03512, over 18351.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2473, pruned_loss=0.03285, over 3582807.89 frames. ], batch size: 55, lr: 4.17e-03, grad_scale: 8.0
2023-03-09 21:08:02,911 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95695.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:08:12,444 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7685, 4.0734, 2.4250, 3.9371, 5.0331, 2.6304, 3.5759, 3.8930], device='cuda:3'), covar=tensor([0.0201, 0.1109, 0.1668, 0.0657, 0.0109, 0.1204, 0.0809, 0.0727], device='cuda:3'), in_proj_covar=tensor([0.0184, 0.0282, 0.0210, 0.0203, 0.0141, 0.0189, 0.0223, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 21:08:16,303 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.614e+02 2.547e+02 2.881e+02 3.580e+02 8.977e+02, threshold=5.762e+02, percent-clipped=2.0
2023-03-09 21:08:23,311 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-09 21:08:50,127 INFO [train.py:898] (3/4) Epoch 27, batch 1250, loss[loss=0.1587, simple_loss=0.2541, pruned_loss=0.03159, over 18457.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2468, pruned_loss=0.03307, over 3570910.41 frames. ], batch size: 59, lr: 4.17e-03, grad_scale: 8.0
2023-03-09 21:09:20,721 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95761.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 21:09:45,008 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8336, 2.6318, 2.7530, 2.8631, 3.4282, 5.0790, 4.9641, 3.3136], device='cuda:3'), covar=tensor([0.2029, 0.2500, 0.2991, 0.1942, 0.2389, 0.0208, 0.0335, 0.1139], device='cuda:3'), in_proj_covar=tensor([0.0328, 0.0364, 0.0409, 0.0291, 0.0398, 0.0265, 0.0305, 0.0273], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3')
2023-03-09 21:09:49,128 INFO [train.py:898] (3/4) Epoch 27, batch 1300, loss[loss=0.1683, simple_loss=0.2603, pruned_loss=0.03814, over 18332.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2469, pruned_loss=0.03307, over 3560523.69 frames. ], batch size: 54, lr: 4.17e-03, grad_scale: 8.0
2023-03-09 21:10:15,103 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.836e+02 2.534e+02 2.954e+02 3.886e+02 9.660e+02, threshold=5.908e+02, percent-clipped=7.0
2023-03-09 21:10:17,520 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=95809.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 21:10:21,578 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95812.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 21:10:48,199 INFO [train.py:898] (3/4) Epoch 27, batch 1350, loss[loss=0.1422, simple_loss=0.2274, pruned_loss=0.02847, over 18256.00 frames. ], tot_loss[loss=0.156, simple_loss=0.2462, pruned_loss=0.03286, over 3577654.60 frames. ], batch size: 47, lr: 4.17e-03, grad_scale: 8.0
2023-03-09 21:11:02,522 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95847.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:11:17,059 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0798, 3.7114, 5.1074, 2.9431, 4.5005, 2.6648, 3.2376, 1.7485], device='cuda:3'), covar=tensor([0.1165, 0.1014, 0.0176, 0.1006, 0.0458, 0.2539, 0.2587, 0.2407], device='cuda:3'), in_proj_covar=tensor([0.0231, 0.0254, 0.0226, 0.0209, 0.0268, 0.0280, 0.0337, 0.0246], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 21:11:25,381 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8816, 4.1244, 2.4324, 4.0635, 5.1408, 2.5067, 3.7690, 4.0365], device='cuda:3'), covar=tensor([0.0211, 0.1177, 0.1744, 0.0687, 0.0094, 0.1433, 0.0758, 0.0674], device='cuda:3'), in_proj_covar=tensor([0.0183, 0.0281, 0.0209, 0.0201, 0.0141, 0.0188, 0.0222, 0.0232], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 21:11:32,199 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95873.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 21:11:46,333 INFO [train.py:898] (3/4) Epoch 27, batch 1400, loss[loss=0.1491, simple_loss=0.2408, pruned_loss=0.0287, over 18491.00 frames. ], tot_loss[loss=0.1566, simple_loss=0.2469, pruned_loss=0.03313, over 3577327.32 frames. ], batch size: 51, lr: 4.17e-03, grad_scale: 8.0
2023-03-09 21:12:09,858 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8915, 5.3285, 5.2998, 5.3393, 4.8339, 5.2652, 4.7608, 5.2075], device='cuda:3'), covar=tensor([0.0217, 0.0278, 0.0207, 0.0426, 0.0412, 0.0206, 0.0940, 0.0340], device='cuda:3'), in_proj_covar=tensor([0.0233, 0.0275, 0.0276, 0.0354, 0.0286, 0.0285, 0.0318, 0.0280], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 21:12:11,666 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.788e+02 2.502e+02 2.884e+02 3.426e+02 1.018e+03, threshold=5.768e+02, percent-clipped=3.0
2023-03-09 21:12:13,297 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95908.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 21:12:22,281 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95916.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:12:26,885 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.89 vs. limit=5.0
2023-03-09 21:12:32,120 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95924.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:12:45,207 INFO [train.py:898] (3/4) Epoch 27, batch 1450, loss[loss=0.1742, simple_loss=0.2642, pruned_loss=0.04206, over 18252.00 frames. ], tot_loss[loss=0.1569, simple_loss=0.2473, pruned_loss=0.03326, over 3578698.56 frames. ], batch size: 57, lr: 4.17e-03, grad_scale: 8.0
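The optim.py:369 lines follow a fixed pattern: five ascending grad-norm statistics that read as the 0/25/50/75/100 percentiles of recently observed gradient norms, a threshold that in every entry equals Clipping_scale times the median (2.0 * 2.884e+02 = 5.768e+02, twice in this stretch), and a percent-clipped figure for how often recent updates exceeded it. A sketch of that bookkeeping follows; the size and decay of the window of recent norms, and whether percent-clipped is a fraction or a count, are assumptions.

import torch

def grad_norm_stats(recent_norms: list, clipping_scale: float = 2.0):
    # recent_norms: gradient norms observed over the last logging interval.
    # Returns (quartiles, threshold, percent_clipped) in the same shape as
    # the "grad-norm quartiles ... threshold=..., percent-clipped=..." lines.
    norms = torch.tensor(sorted(recent_norms))
    n = len(norms)
    quartiles = [norms[min(n - 1, (n * k) // 4)].item() for k in range(5)]
    threshold = clipping_scale * quartiles[2]  # 2.0 x median, as logged
    percent_clipped = 100.0 * (norms > threshold).float().mean().item()
    return quartiles, threshold, percent_clipped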
2023-03-09 21:13:19,366 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=95964.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:13:29,130 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=95972.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:13:30,274 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95973.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:13:38,786 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95980.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:13:44,686 INFO [train.py:898] (3/4) Epoch 27, batch 1500, loss[loss=0.1306, simple_loss=0.2151, pruned_loss=0.02308, over 18381.00 frames. ], tot_loss[loss=0.1561, simple_loss=0.2464, pruned_loss=0.0329, over 3591014.52 frames. ], batch size: 42, lr: 4.17e-03, grad_scale: 8.0
2023-03-09 21:13:45,884 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0
2023-03-09 21:13:56,533 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95995.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:14:15,334 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.678e+02 2.536e+02 3.010e+02 3.569e+02 8.325e+02, threshold=6.021e+02, percent-clipped=2.0
2023-03-09 21:14:23,820 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6902, 2.3855, 2.5561, 2.6339, 3.1212, 4.5304, 4.5025, 3.0339], device='cuda:3'), covar=tensor([0.2087, 0.2656, 0.3139, 0.2106, 0.2525, 0.0348, 0.0422, 0.1254], device='cuda:3'), in_proj_covar=tensor([0.0328, 0.0364, 0.0410, 0.0291, 0.0399, 0.0266, 0.0305, 0.0273], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:3')
2023-03-09 21:14:47,958 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3398, 5.3011, 4.9898, 5.2382, 5.2613, 4.5958, 5.1969, 4.9383], device='cuda:3'), covar=tensor([0.0458, 0.0501, 0.1393, 0.0860, 0.0596, 0.0472, 0.0460, 0.1016], device='cuda:3'), in_proj_covar=tensor([0.0518, 0.0587, 0.0729, 0.0453, 0.0475, 0.0533, 0.0567, 0.0702], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 21:14:48,703 INFO [train.py:898] (3/4) Epoch 27, batch 1550, loss[loss=0.1706, simple_loss=0.2642, pruned_loss=0.0385, over 18363.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2459, pruned_loss=0.03293, over 3596006.60 frames. ], batch size: 56, lr: 4.16e-03, grad_scale: 8.0
2023-03-09 21:14:52,408 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.97 vs. limit=5.0
2023-03-09 21:14:56,588 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96041.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:14:58,712 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=96043.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:15:42,218 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96080.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:15:47,453 INFO [train.py:898] (3/4) Epoch 27, batch 1600, loss[loss=0.1761, simple_loss=0.2623, pruned_loss=0.04495, over 12332.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.2463, pruned_loss=0.03308, over 3587310.01 frames. ], batch size: 129, lr: 4.16e-03, grad_scale: 8.0
2023-03-09 21:16:13,695 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.626e+02 2.554e+02 2.952e+02 3.654e+02 6.643e+02, threshold=5.903e+02, percent-clipped=2.0
2023-03-09 21:16:38,530 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0123, 4.5285, 4.2467, 4.3710, 4.1281, 4.7879, 4.4966, 4.2348], device='cuda:3'), covar=tensor([0.1634, 0.1236, 0.1032, 0.0960, 0.1457, 0.1213, 0.0784, 0.1820], device='cuda:3'), in_proj_covar=tensor([0.0371, 0.0303, 0.0328, 0.0330, 0.0338, 0.0442, 0.0295, 0.0433], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 21:16:44,021 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0
2023-03-09 21:16:45,548 INFO [train.py:898] (3/4) Epoch 27, batch 1650, loss[loss=0.1673, simple_loss=0.2562, pruned_loss=0.03919, over 18361.00 frames. ], tot_loss[loss=0.1568, simple_loss=0.247, pruned_loss=0.03326, over 3586340.20 frames. ], batch size: 46, lr: 4.16e-03, grad_scale: 8.0
2023-03-09 21:16:53,599 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96141.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:17:21,279 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96165.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:17:21,670 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.69 vs. limit=2.0
2023-03-09 21:17:24,606 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96168.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 21:17:44,176 INFO [train.py:898] (3/4) Epoch 27, batch 1700, loss[loss=0.1848, simple_loss=0.284, pruned_loss=0.04286, over 18354.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2478, pruned_loss=0.03344, over 3590598.28 frames. ], batch size: 55, lr: 4.16e-03, grad_scale: 8.0
2023-03-09 21:18:06,269 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96203.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 21:18:10,516 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.955e+02 2.460e+02 3.228e+02 3.841e+02 7.050e+02, threshold=6.456e+02, percent-clipped=7.0
2023-03-09 21:18:32,823 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96226.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:18:42,664 INFO [train.py:898] (3/4) Epoch 27, batch 1750, loss[loss=0.1422, simple_loss=0.2389, pruned_loss=0.0228, over 18587.00 frames. ], tot_loss[loss=0.1575, simple_loss=0.2482, pruned_loss=0.03342, over 3586686.55 frames. ], batch size: 54, lr: 4.16e-03, grad_scale: 8.0
2023-03-09 21:19:27,927 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96273.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:19:32,695 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9672, 5.0353, 5.0902, 4.7795, 4.8550, 4.8073, 5.1787, 5.1735], device='cuda:3'), covar=tensor([0.0067, 0.0069, 0.0054, 0.0109, 0.0057, 0.0156, 0.0066, 0.0086], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0074, 0.0080, 0.0100, 0.0079, 0.0107, 0.0091, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-09 21:19:41,575 INFO [train.py:898] (3/4) Epoch 27, batch 1800, loss[loss=0.1406, simple_loss=0.233, pruned_loss=0.0241, over 18258.00 frames. ], tot_loss[loss=0.1574, simple_loss=0.248, pruned_loss=0.03342, over 3579554.68 frames. ], batch size: 47, lr: 4.16e-03, grad_scale: 8.0
2023-03-09 21:20:07,993 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.809e+02 2.546e+02 2.957e+02 3.630e+02 5.615e+02, threshold=5.915e+02, percent-clipped=0.0
2023-03-09 21:20:24,642 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=96321.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:20:28,173 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96324.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:20:40,272 INFO [train.py:898] (3/4) Epoch 27, batch 1850, loss[loss=0.1521, simple_loss=0.2408, pruned_loss=0.03163, over 18361.00 frames. ], tot_loss[loss=0.1571, simple_loss=0.2477, pruned_loss=0.03324, over 3595071.84 frames. ], batch size: 46, lr: 4.16e-03, grad_scale: 8.0
2023-03-09 21:20:41,713 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96336.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:21:26,236 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9024, 3.0841, 2.8115, 2.9901, 3.8375, 3.7597, 3.2978, 3.0548], device='cuda:3'), covar=tensor([0.0194, 0.0269, 0.0541, 0.0402, 0.0182, 0.0160, 0.0382, 0.0418], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0146, 0.0168, 0.0166, 0.0141, 0.0128, 0.0161, 0.0164], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 21:21:37,324 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9335, 5.4023, 5.3848, 5.4180, 4.8301, 5.3141, 4.6837, 5.2437], device='cuda:3'), covar=tensor([0.0233, 0.0291, 0.0209, 0.0417, 0.0385, 0.0217, 0.1062, 0.0347], device='cuda:3'), in_proj_covar=tensor([0.0234, 0.0276, 0.0277, 0.0355, 0.0287, 0.0286, 0.0319, 0.0279], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 21:21:38,003 INFO [train.py:898] (3/4) Epoch 27, batch 1900, loss[loss=0.1598, simple_loss=0.2447, pruned_loss=0.03742, over 18543.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2478, pruned_loss=0.0334, over 3587169.45 frames. ], batch size: 49, lr: 4.16e-03, grad_scale: 8.0
2023-03-09 21:21:38,396 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96385.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:21:44,162 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96390.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:22:04,763 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.806e+02 2.747e+02 3.313e+02 4.135e+02 7.560e+02, threshold=6.625e+02, percent-clipped=3.0
2023-03-09 21:22:36,427 INFO [train.py:898] (3/4) Epoch 27, batch 1950, loss[loss=0.1546, simple_loss=0.2478, pruned_loss=0.03075, over 18424.00 frames. ], tot_loss[loss=0.1577, simple_loss=0.2484, pruned_loss=0.03349, over 3589101.41 frames. ], batch size: 52, lr: 4.16e-03, grad_scale: 8.0
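The zipformer.py:1455 dumps are attention diagnostics: each 8-element attn_weights_entropy vector reads as one entropy value per attention head, with the companion covar/in_proj_covar/out_proj_covar tensors as related spread statistics. Only the entropy formula in the sketch below is standard; the tensor layout and the averaging over query positions are assumptions.

import torch

def attn_entropy_per_head(attn_weights: torch.Tensor) -> torch.Tensor:
    # attn_weights: (num_heads, num_queries, num_keys), each row a
    # probability distribution over keys.  H(p) = -sum_i p_i log p_i,
    # averaged over queries, yields one value per head, i.e. an 8-element
    # vector like the ones dumped above.
    eps = 1.0e-20
    h = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return h.mean(dim=-1)

Higher values mean a head spreads its attention broadly over keys; lower values mean it focuses on a few. The logged vectors range from roughly 1.7 to 6.2 across heads and layers.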
2023-03-09 21:22:37,795 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96436.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:22:55,503 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96451.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:23:15,690 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96468.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 21:23:35,121 INFO [train.py:898] (3/4) Epoch 27, batch 2000, loss[loss=0.1541, simple_loss=0.2493, pruned_loss=0.02951, over 18111.00 frames. ], tot_loss[loss=0.1578, simple_loss=0.2483, pruned_loss=0.03366, over 3590623.48 frames. ], batch size: 62, lr: 4.15e-03, grad_scale: 8.0
2023-03-09 21:23:56,290 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96503.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:24:00,975 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.734e+02 2.669e+02 3.146e+02 3.749e+02 6.001e+02, threshold=6.292e+02, percent-clipped=0.0
2023-03-09 21:24:11,843 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=96516.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 21:24:17,446 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96521.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:24:33,584 INFO [train.py:898] (3/4) Epoch 27, batch 2050, loss[loss=0.1504, simple_loss=0.2418, pruned_loss=0.02953, over 18569.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2477, pruned_loss=0.0334, over 3595578.43 frames. ], batch size: 54, lr: 4.15e-03, grad_scale: 8.0
2023-03-09 21:24:51,979 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=96551.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:25:11,002 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3334, 5.3006, 5.6206, 5.6354, 5.3029, 6.1929, 5.8189, 5.3867], device='cuda:3'), covar=tensor([0.1263, 0.0705, 0.0825, 0.0873, 0.1456, 0.0699, 0.0609, 0.1794], device='cuda:3'), in_proj_covar=tensor([0.0375, 0.0304, 0.0330, 0.0333, 0.0342, 0.0445, 0.0298, 0.0436], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 21:25:32,770 INFO [train.py:898] (3/4) Epoch 27, batch 2100, loss[loss=0.1608, simple_loss=0.2602, pruned_loss=0.03066, over 18486.00 frames. ], tot_loss[loss=0.1575, simple_loss=0.248, pruned_loss=0.03353, over 3575501.49 frames. ], batch size: 53, lr: 4.15e-03, grad_scale: 8.0
2023-03-09 21:25:58,870 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.510e+02 2.412e+02 2.976e+02 3.502e+02 7.599e+02, threshold=5.952e+02, percent-clipped=1.0
2023-03-09 21:26:32,722 INFO [train.py:898] (3/4) Epoch 27, batch 2150, loss[loss=0.1526, simple_loss=0.244, pruned_loss=0.03061, over 18386.00 frames. ], tot_loss[loss=0.1572, simple_loss=0.248, pruned_loss=0.03323, over 3588994.81 frames. ], batch size: 50, lr: 4.15e-03, grad_scale: 8.0
2023-03-09 21:26:34,156 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96636.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:27:25,091 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5522, 3.4630, 2.2852, 4.3557, 3.1244, 4.0615, 2.7986, 3.9015], device='cuda:3'), covar=tensor([0.0685, 0.0806, 0.1497, 0.0496, 0.0847, 0.0317, 0.1062, 0.0462], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0231, 0.0196, 0.0295, 0.0196, 0.0272, 0.0205, 0.0207], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-09 21:27:26,060 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96680.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:27:30,969 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=96684.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:27:31,869 INFO [train.py:898] (3/4) Epoch 27, batch 2200, loss[loss=0.1666, simple_loss=0.2561, pruned_loss=0.03854, over 18619.00 frames. ], tot_loss[loss=0.1569, simple_loss=0.2476, pruned_loss=0.0331, over 3596802.84 frames. ], batch size: 52, lr: 4.15e-03, grad_scale: 8.0
2023-03-09 21:27:56,875 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 2.105e+02 2.631e+02 3.073e+02 3.700e+02 7.959e+02, threshold=6.147e+02, percent-clipped=1.0
2023-03-09 21:28:29,885 INFO [train.py:898] (3/4) Epoch 27, batch 2250, loss[loss=0.1797, simple_loss=0.2644, pruned_loss=0.0475, over 12492.00 frames. ], tot_loss[loss=0.1578, simple_loss=0.2483, pruned_loss=0.03365, over 3578313.27 frames. ], batch size: 129, lr: 4.15e-03, grad_scale: 8.0
2023-03-09 21:28:31,931 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96736.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:28:43,244 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96746.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:29:18,845 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96776.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:29:27,622 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=96784.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:29:27,679 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3526, 5.2738, 5.6411, 5.7370, 5.2560, 6.1654, 5.8317, 5.3964], device='cuda:3'), covar=tensor([0.1002, 0.0610, 0.0672, 0.0694, 0.1187, 0.0684, 0.0613, 0.1531], device='cuda:3'), in_proj_covar=tensor([0.0372, 0.0303, 0.0330, 0.0332, 0.0342, 0.0446, 0.0298, 0.0435], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 21:29:28,578 INFO [train.py:898] (3/4) Epoch 27, batch 2300, loss[loss=0.1506, simple_loss=0.2412, pruned_loss=0.02996, over 18265.00 frames. ], tot_loss[loss=0.1575, simple_loss=0.2481, pruned_loss=0.03348, over 3588376.08 frames. ], batch size: 47, lr: 4.15e-03, grad_scale: 8.0
2023-03-09 21:29:53,978 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.778e+02 2.521e+02 2.973e+02 3.966e+02 7.107e+02, threshold=5.946e+02, percent-clipped=4.0
2023-03-09 21:30:03,136 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-09 21:30:11,756 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96821.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:30:12,068 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0
2023-03-09 21:30:27,080 INFO [train.py:898] (3/4) Epoch 27, batch 2350, loss[loss=0.1567, simple_loss=0.2509, pruned_loss=0.03123, over 18358.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2482, pruned_loss=0.03383, over 3578275.59 frames. ], batch size: 55, lr: 4.15e-03, grad_scale: 8.0
2023-03-09 21:30:29,667 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96837.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:31:07,625 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=96869.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:31:26,980 INFO [train.py:898] (3/4) Epoch 27, batch 2400, loss[loss=0.1451, simple_loss=0.2371, pruned_loss=0.02652, over 18294.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2484, pruned_loss=0.03369, over 3582684.64 frames. ], batch size: 49, lr: 4.15e-03, grad_scale: 8.0
2023-03-09 21:31:46,459 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0
2023-03-09 21:31:52,347 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.581e+02 2.380e+02 2.700e+02 3.447e+02 6.459e+02, threshold=5.400e+02, percent-clipped=1.0
2023-03-09 21:32:25,336 INFO [train.py:898] (3/4) Epoch 27, batch 2450, loss[loss=0.1596, simple_loss=0.255, pruned_loss=0.03209, over 16972.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2484, pruned_loss=0.0337, over 3579075.11 frames. ], batch size: 78, lr: 4.14e-03, grad_scale: 8.0
2023-03-09 21:33:19,351 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96980.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:33:22,441 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-09 21:33:24,803 INFO [train.py:898] (3/4) Epoch 27, batch 2500, loss[loss=0.1658, simple_loss=0.2591, pruned_loss=0.03622, over 17897.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2484, pruned_loss=0.03371, over 3573269.81 frames. ], batch size: 70, lr: 4.14e-03, grad_scale: 8.0
2023-03-09 21:33:41,720 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-09 21:33:42,728 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97000.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:33:50,765 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.816e+02 2.553e+02 2.843e+02 3.348e+02 7.332e+02, threshold=5.685e+02, percent-clipped=3.0
2023-03-09 21:34:15,120 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=97028.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 21:34:16,528 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5766, 2.4039, 2.6125, 2.5371, 3.0323, 4.7103, 4.6140, 3.1614], device='cuda:3'), covar=tensor([0.2182, 0.2725, 0.3145, 0.2197, 0.2689, 0.0268, 0.0400, 0.1196], device='cuda:3'), in_proj_covar=tensor([0.0330, 0.0365, 0.0412, 0.0293, 0.0400, 0.0267, 0.0305, 0.0274], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:3')
2023-03-09 21:34:23,790 INFO [train.py:898] (3/4) Epoch 27, batch 2550, loss[loss=0.1677, simple_loss=0.2555, pruned_loss=0.03998, over 18453.00 frames. ], tot_loss[loss=0.1581, simple_loss=0.2485, pruned_loss=0.03383, over 3577952.82 frames. ], batch size: 59, lr: 4.14e-03, grad_scale: 8.0
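Within an epoch, the tot_loss frame counts climb over the first few hundred batches (806012.41 at batch 50, 1420221.27 at batch 100, 1912182.77 at batch 150) and then level off near 3.6e6, which is the signature of an exponentially decayed, frame-weighted running average rather than a plain cumulative sum. The sketch below reproduces that pattern; the decay constant is fitted to the observed plateau, not taken from the training code.

class RunningLoss:
    # Frame-weighted accumulator: with roughly 18k frames per batch and a
    # decay of 1 - 1/200, the effective frame count saturates around 3.6e6,
    # matching the "over N frames" plateau in the tot_loss entries.
    def __init__(self, decay: float = 1.0 - 1.0 / 200):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> float:
        self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
        self.frames = self.frames * self.decay + batch_frames
        return self.loss_sum / self.frames  # the reported tot_loss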
], tot_loss[loss=0.1581, simple_loss=0.2485, pruned_loss=0.03383, over 3577952.82 frames. ], batch size: 59, lr: 4.14e-03, grad_scale: 8.0 2023-03-09 21:34:36,690 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97046.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:34:54,316 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97061.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:34:55,472 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8181, 4.8326, 4.9420, 4.6128, 4.6988, 4.6837, 4.9978, 4.9658], device='cuda:3'), covar=tensor([0.0074, 0.0077, 0.0054, 0.0126, 0.0059, 0.0165, 0.0068, 0.0098], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0075, 0.0080, 0.0100, 0.0080, 0.0109, 0.0091, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 21:35:22,690 INFO [train.py:898] (3/4) Epoch 27, batch 2600, loss[loss=0.1514, simple_loss=0.237, pruned_loss=0.03287, over 18556.00 frames. ], tot_loss[loss=0.1573, simple_loss=0.2479, pruned_loss=0.03339, over 3574880.30 frames. ], batch size: 49, lr: 4.14e-03, grad_scale: 8.0 2023-03-09 21:35:32,736 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=97094.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:35:47,783 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.742e+02 2.535e+02 2.936e+02 3.647e+02 8.103e+02, threshold=5.871e+02, percent-clipped=2.0 2023-03-09 21:36:16,483 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97132.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:36:19,896 INFO [train.py:898] (3/4) Epoch 27, batch 2650, loss[loss=0.1337, simple_loss=0.2156, pruned_loss=0.02588, over 18410.00 frames. ], tot_loss[loss=0.1572, simple_loss=0.2481, pruned_loss=0.03311, over 3574356.96 frames. ], batch size: 42, lr: 4.14e-03, grad_scale: 8.0 2023-03-09 21:37:18,526 INFO [train.py:898] (3/4) Epoch 27, batch 2700, loss[loss=0.1754, simple_loss=0.2628, pruned_loss=0.04399, over 16007.00 frames. ], tot_loss[loss=0.1572, simple_loss=0.2481, pruned_loss=0.03313, over 3583271.80 frames. ], batch size: 94, lr: 4.14e-03, grad_scale: 8.0 2023-03-09 21:37:38,526 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.6132, 3.9653, 2.2822, 3.9205, 5.0270, 2.6342, 3.5061, 3.7765], device='cuda:3'), covar=tensor([0.0251, 0.1333, 0.1874, 0.0732, 0.0114, 0.1329, 0.0883, 0.0823], device='cuda:3'), in_proj_covar=tensor([0.0184, 0.0283, 0.0211, 0.0201, 0.0142, 0.0188, 0.0224, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 21:37:38,921 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-09 21:37:44,983 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.852e+02 2.663e+02 3.178e+02 4.002e+02 6.925e+02, threshold=6.355e+02, percent-clipped=3.0 2023-03-09 21:38:17,458 INFO [train.py:898] (3/4) Epoch 27, batch 2750, loss[loss=0.1334, simple_loss=0.2179, pruned_loss=0.02446, over 17376.00 frames. ], tot_loss[loss=0.1576, simple_loss=0.2484, pruned_loss=0.03337, over 3580395.07 frames. 
], batch size: 38, lr: 4.14e-03, grad_scale: 8.0 2023-03-09 21:38:47,057 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97260.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:39:13,902 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-09 21:39:16,418 INFO [train.py:898] (3/4) Epoch 27, batch 2800, loss[loss=0.1797, simple_loss=0.2711, pruned_loss=0.04412, over 18243.00 frames. ], tot_loss[loss=0.1566, simple_loss=0.2474, pruned_loss=0.03292, over 3587324.30 frames. ], batch size: 60, lr: 4.14e-03, grad_scale: 8.0 2023-03-09 21:39:44,377 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.673e+02 2.484e+02 2.831e+02 3.589e+02 1.148e+03, threshold=5.662e+02, percent-clipped=3.0 2023-03-09 21:40:00,217 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97321.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:40:01,887 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=2.01 vs. limit=2.0 2023-03-09 21:40:15,675 INFO [train.py:898] (3/4) Epoch 27, batch 2850, loss[loss=0.1481, simple_loss=0.2466, pruned_loss=0.02478, over 18501.00 frames. ], tot_loss[loss=0.1564, simple_loss=0.2473, pruned_loss=0.03273, over 3595092.28 frames. ], batch size: 47, lr: 4.14e-03, grad_scale: 8.0 2023-03-09 21:40:24,462 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97342.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:40:30,843 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97347.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:40:41,804 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97356.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:41:03,106 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97374.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:41:15,310 INFO [train.py:898] (3/4) Epoch 27, batch 2900, loss[loss=0.1728, simple_loss=0.2685, pruned_loss=0.03852, over 18351.00 frames. ], tot_loss[loss=0.156, simple_loss=0.247, pruned_loss=0.03255, over 3599146.10 frames. ], batch size: 55, lr: 4.14e-03, grad_scale: 8.0 2023-03-09 21:41:38,589 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97403.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 21:41:43,560 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.636e+02 2.502e+02 2.896e+02 3.589e+02 7.058e+02, threshold=5.793e+02, percent-clipped=2.0 2023-03-09 21:41:43,972 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97408.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:42:11,894 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97432.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:42:14,998 INFO [train.py:898] (3/4) Epoch 27, batch 2950, loss[loss=0.1302, simple_loss=0.2147, pruned_loss=0.02288, over 18511.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2465, pruned_loss=0.0327, over 3574924.62 frames. 
], batch size: 44, lr: 4.13e-03, grad_scale: 4.0 2023-03-09 21:42:15,366 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97435.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:43:06,359 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5324, 2.8695, 2.5705, 2.8194, 3.6681, 3.4990, 3.1461, 2.8629], device='cuda:3'), covar=tensor([0.0191, 0.0277, 0.0542, 0.0410, 0.0175, 0.0175, 0.0408, 0.0409], device='cuda:3'), in_proj_covar=tensor([0.0146, 0.0146, 0.0168, 0.0166, 0.0143, 0.0128, 0.0162, 0.0165], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 21:43:08,292 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=97480.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:43:13,719 INFO [train.py:898] (3/4) Epoch 27, batch 3000, loss[loss=0.1571, simple_loss=0.2419, pruned_loss=0.03608, over 18364.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2464, pruned_loss=0.0327, over 3589444.34 frames. ], batch size: 46, lr: 4.13e-03, grad_scale: 4.0 2023-03-09 21:43:13,720 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 21:43:26,376 INFO [train.py:932] (3/4) Epoch 27, validation: loss=0.1498, simple_loss=0.2479, pruned_loss=0.02584, over 944034.00 frames. 2023-03-09 21:43:26,376 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 21:43:45,934 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1764, 5.6448, 5.3242, 5.4311, 5.2476, 5.0998, 5.7045, 5.6837], device='cuda:3'), covar=tensor([0.1243, 0.0837, 0.0672, 0.0730, 0.1555, 0.0742, 0.0666, 0.0754], device='cuda:3'), in_proj_covar=tensor([0.0636, 0.0564, 0.0403, 0.0581, 0.0782, 0.0577, 0.0795, 0.0609], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 21:43:54,641 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.734e+02 2.624e+02 3.078e+02 3.801e+02 7.902e+02, threshold=6.156e+02, percent-clipped=2.0 2023-03-09 21:44:25,247 INFO [train.py:898] (3/4) Epoch 27, batch 3050, loss[loss=0.1583, simple_loss=0.2534, pruned_loss=0.03155, over 18222.00 frames. ], tot_loss[loss=0.1569, simple_loss=0.2478, pruned_loss=0.03303, over 3592252.59 frames. ], batch size: 60, lr: 4.13e-03, grad_scale: 4.0 2023-03-09 21:44:41,783 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-09 21:44:49,663 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97555.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:45:07,213 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-09 21:45:24,093 INFO [train.py:898] (3/4) Epoch 27, batch 3100, loss[loss=0.1773, simple_loss=0.2633, pruned_loss=0.04565, over 18485.00 frames. ], tot_loss[loss=0.1567, simple_loss=0.2475, pruned_loss=0.03292, over 3589740.23 frames. 
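The train.py:923-933 sequence above is the periodic validation pass: switch to eval mode, aggregate a frame-weighted loss over the fixed dev set (the same 944034.00 frames every time it runs), then report peak GPU memory. A minimal sketch, assuming a model(batch) interface that returns (loss, num_frames); the real loader and model interfaces differ:

```python
# Minimal validation pass matching the logged sequence; the model/dataloader
# interface is assumed, not the project's actual one.

import torch

def compute_validation_loss(model, valid_loader, device="cuda:3"):
    was_training = model.training
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            loss, num_frames = model(batch)  # assumed interface
            tot_loss += float(loss) * num_frames
            tot_frames += num_frames
    if was_training:
        model.train()
    max_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"validation: loss={tot_loss / tot_frames:.4f}, "
          f"over {tot_frames:.2f} frames.")
    print(f"Maximum memory allocated so far is {max_mb}MB")
    return tot_loss / tot_frames
```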
], batch size: 51, lr: 4.13e-03, grad_scale: 4.0 2023-03-09 21:45:52,561 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.712e+02 2.595e+02 3.045e+02 3.637e+02 1.953e+03, threshold=6.090e+02, percent-clipped=1.0 2023-03-09 21:45:58,400 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4904, 5.4658, 5.1164, 5.4009, 5.4115, 4.7118, 5.2859, 5.0432], device='cuda:3'), covar=tensor([0.0434, 0.0441, 0.1227, 0.0793, 0.0607, 0.0474, 0.0459, 0.1101], device='cuda:3'), in_proj_covar=tensor([0.0517, 0.0591, 0.0730, 0.0457, 0.0478, 0.0537, 0.0573, 0.0710], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 21:46:00,711 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97616.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:46:00,872 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97616.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:46:22,624 INFO [train.py:898] (3/4) Epoch 27, batch 3150, loss[loss=0.1363, simple_loss=0.2257, pruned_loss=0.02348, over 18380.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2464, pruned_loss=0.03258, over 3598351.34 frames. ], batch size: 50, lr: 4.13e-03, grad_scale: 4.0 2023-03-09 21:46:47,747 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97656.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:47:09,117 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97674.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 21:47:21,031 INFO [train.py:898] (3/4) Epoch 27, batch 3200, loss[loss=0.1831, simple_loss=0.2717, pruned_loss=0.04723, over 18268.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.2468, pruned_loss=0.03279, over 3599174.23 frames. ], batch size: 60, lr: 4.13e-03, grad_scale: 8.0 2023-03-09 21:47:36,660 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97698.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 21:47:43,647 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97703.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:47:44,757 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=97704.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:47:50,084 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.671e+02 2.606e+02 3.062e+02 3.639e+02 1.226e+03, threshold=6.124e+02, percent-clipped=5.0 2023-03-09 21:48:14,881 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97730.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:48:20,215 INFO [train.py:898] (3/4) Epoch 27, batch 3250, loss[loss=0.1498, simple_loss=0.248, pruned_loss=0.02586, over 18624.00 frames. ], tot_loss[loss=0.1563, simple_loss=0.2471, pruned_loss=0.03274, over 3594770.44 frames. 
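The grad_scale field in these entries steps between 4.0 and 8.0 (and reaches 16.0 further down): halving right after a batch and doubling back after a long run of clean steps is the signature of dynamic loss scaling for fp16 training. A sketch of the standard torch.cuda.amp pattern that produces such a trace; the init_scale here is illustrative, and model, optimizer and batch are assumed:

```python
# Standard dynamic loss-scaling loop; scaler.get_scale() is the value that
# appears as grad_scale in the log.

import torch

scaler = torch.cuda.amp.GradScaler(
    init_scale=8.0,       # illustrative starting point
    backoff_factor=0.5,   # halve the scale on an inf/nan gradient step
    growth_factor=2.0,    # double it after growth_interval clean steps
    growth_interval=2000,
)

def training_step(model, optimizer, batch):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)       # assumed to return a scalar loss
    scaler.scale(loss).backward()
    scaler.step(optimizer)        # skipped internally if grads overflowed
    scaler.update()               # applies backoff or growth
    return scaler.get_scale()     # logged as grad_scale
```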
], batch size: 52, lr: 4.13e-03, grad_scale: 8.0 2023-03-09 21:48:20,649 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97735.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 21:48:44,994 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0844, 5.5553, 5.2474, 5.3505, 5.2130, 5.0588, 5.6011, 5.5880], device='cuda:3'), covar=tensor([0.1268, 0.0793, 0.0636, 0.0716, 0.1462, 0.0706, 0.0680, 0.0720], device='cuda:3'), in_proj_covar=tensor([0.0638, 0.0560, 0.0403, 0.0581, 0.0780, 0.0579, 0.0797, 0.0610], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 21:49:19,387 INFO [train.py:898] (3/4) Epoch 27, batch 3300, loss[loss=0.1598, simple_loss=0.2567, pruned_loss=0.03145, over 16110.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.2471, pruned_loss=0.03261, over 3598830.64 frames. ], batch size: 94, lr: 4.13e-03, grad_scale: 8.0 2023-03-09 21:49:48,579 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.710e+02 2.346e+02 2.851e+02 3.327e+02 7.225e+02, threshold=5.702e+02, percent-clipped=2.0 2023-03-09 21:50:18,959 INFO [train.py:898] (3/4) Epoch 27, batch 3350, loss[loss=0.1336, simple_loss=0.2239, pruned_loss=0.02164, over 18505.00 frames. ], tot_loss[loss=0.156, simple_loss=0.2468, pruned_loss=0.03262, over 3595725.19 frames. ], batch size: 44, lr: 4.13e-03, grad_scale: 8.0 2023-03-09 21:51:00,994 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-09 21:51:17,700 INFO [train.py:898] (3/4) Epoch 27, batch 3400, loss[loss=0.1782, simple_loss=0.2761, pruned_loss=0.04015, over 18569.00 frames. ], tot_loss[loss=0.1557, simple_loss=0.2465, pruned_loss=0.03248, over 3599324.92 frames. 
], batch size: 54, lr: 4.12e-03, grad_scale: 8.0 2023-03-09 21:51:21,350 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97888.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:51:45,804 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.740e+02 2.527e+02 3.006e+02 3.880e+02 7.423e+02, threshold=6.013e+02, percent-clipped=1.0 2023-03-09 21:51:48,332 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97911.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:51:54,543 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97916.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:52:00,546 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7896, 3.0555, 2.6791, 3.0165, 3.8579, 3.7095, 3.3574, 3.0306], device='cuda:3'), covar=tensor([0.0162, 0.0250, 0.0547, 0.0362, 0.0157, 0.0156, 0.0397, 0.0359], device='cuda:3'), in_proj_covar=tensor([0.0145, 0.0146, 0.0168, 0.0166, 0.0142, 0.0128, 0.0162, 0.0165], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 21:52:02,860 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7797, 5.1571, 2.6900, 4.9972, 4.9107, 5.1532, 4.9642, 2.5854], device='cuda:3'), covar=tensor([0.0260, 0.0069, 0.0813, 0.0088, 0.0076, 0.0070, 0.0086, 0.1040], device='cuda:3'), in_proj_covar=tensor([0.0094, 0.0084, 0.0099, 0.0100, 0.0091, 0.0080, 0.0087, 0.0099], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 21:52:16,391 INFO [train.py:898] (3/4) Epoch 27, batch 3450, loss[loss=0.1659, simple_loss=0.2532, pruned_loss=0.03925, over 18385.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2466, pruned_loss=0.03245, over 3599878.99 frames. ], batch size: 50, lr: 4.12e-03, grad_scale: 8.0 2023-03-09 21:52:33,487 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97949.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:52:51,315 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=97964.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:53:15,533 INFO [train.py:898] (3/4) Epoch 27, batch 3500, loss[loss=0.1517, simple_loss=0.2432, pruned_loss=0.03009, over 18389.00 frames. ], tot_loss[loss=0.1561, simple_loss=0.247, pruned_loss=0.03266, over 3587011.50 frames. ], batch size: 52, lr: 4.12e-03, grad_scale: 8.0 2023-03-09 21:53:31,261 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97998.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:53:42,223 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98003.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:53:48,591 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.729e+02 2.463e+02 3.009e+02 3.694e+02 8.089e+02, threshold=6.019e+02, percent-clipped=2.0 2023-03-09 21:54:12,150 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98030.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 21:54:12,188 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98030.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:54:17,287 INFO [train.py:898] (3/4) Epoch 27, batch 3550, loss[loss=0.1538, simple_loss=0.2503, pruned_loss=0.02859, over 18563.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2473, pruned_loss=0.0328, over 3589342.12 frames. 
], batch size: 54, lr: 4.12e-03, grad_scale: 8.0 2023-03-09 21:54:29,883 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=98046.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:54:35,244 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=98051.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:55:04,933 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=98078.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:55:12,352 INFO [train.py:898] (3/4) Epoch 27, batch 3600, loss[loss=0.1928, simple_loss=0.2756, pruned_loss=0.05505, over 12239.00 frames. ], tot_loss[loss=0.1566, simple_loss=0.2475, pruned_loss=0.0329, over 3580768.94 frames. ], batch size: 129, lr: 4.12e-03, grad_scale: 8.0 2023-03-09 21:55:35,581 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98106.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:55:38,139 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.847e+02 2.404e+02 2.873e+02 3.414e+02 5.036e+02, threshold=5.745e+02, percent-clipped=0.0 2023-03-09 21:55:44,273 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8659, 2.9294, 2.2460, 3.2648, 2.5348, 2.8283, 2.4730, 2.9300], device='cuda:3'), covar=tensor([0.0665, 0.0834, 0.1244, 0.0630, 0.0782, 0.0327, 0.1002, 0.0487], device='cuda:3'), in_proj_covar=tensor([0.0226, 0.0235, 0.0196, 0.0299, 0.0200, 0.0274, 0.0209, 0.0210], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 21:56:16,250 INFO [train.py:898] (3/4) Epoch 28, batch 0, loss[loss=0.1874, simple_loss=0.2829, pruned_loss=0.046, over 18278.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2829, pruned_loss=0.046, over 18278.00 frames. ], batch size: 57, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 21:56:16,251 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 21:56:28,179 INFO [train.py:932] (3/4) Epoch 28, validation: loss=0.1499, simple_loss=0.2483, pruned_loss=0.02581, over 944034.00 frames. 2023-03-09 21:56:28,180 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 21:57:24,381 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98167.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:57:26,203 INFO [train.py:898] (3/4) Epoch 28, batch 50, loss[loss=0.1828, simple_loss=0.2692, pruned_loss=0.04817, over 12752.00 frames. ], tot_loss[loss=0.156, simple_loss=0.246, pruned_loss=0.03294, over 813427.98 frames. 
], batch size: 130, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 21:58:05,983 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5466, 4.5428, 4.6119, 4.3420, 4.4382, 4.3901, 4.6664, 4.6880], device='cuda:3'), covar=tensor([0.0101, 0.0098, 0.0084, 0.0150, 0.0081, 0.0209, 0.0102, 0.0117], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0075, 0.0080, 0.0100, 0.0080, 0.0109, 0.0092, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 21:58:15,011 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.839e+02 2.500e+02 2.819e+02 3.475e+02 6.212e+02, threshold=5.638e+02, percent-clipped=1.0 2023-03-09 21:58:17,663 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98211.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:58:26,489 INFO [train.py:898] (3/4) Epoch 28, batch 100, loss[loss=0.1501, simple_loss=0.2354, pruned_loss=0.03242, over 18275.00 frames. ], tot_loss[loss=0.1538, simple_loss=0.244, pruned_loss=0.03185, over 1444186.58 frames. ], batch size: 47, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 21:58:56,012 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98244.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:58:56,249 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1112, 4.0391, 5.2076, 4.7126, 3.4308, 3.2712, 4.5710, 5.4918], device='cuda:3'), covar=tensor([0.0719, 0.1198, 0.0184, 0.0332, 0.0929, 0.1138, 0.0377, 0.0193], device='cuda:3'), in_proj_covar=tensor([0.0154, 0.0286, 0.0173, 0.0188, 0.0199, 0.0199, 0.0202, 0.0213], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 21:59:14,206 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=98259.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 21:59:25,572 INFO [train.py:898] (3/4) Epoch 28, batch 150, loss[loss=0.1611, simple_loss=0.2562, pruned_loss=0.033, over 18617.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2454, pruned_loss=0.03234, over 1912296.87 frames. ], batch size: 52, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 22:00:14,049 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.664e+02 2.460e+02 2.957e+02 3.466e+02 5.415e+02, threshold=5.915e+02, percent-clipped=0.0 2023-03-09 22:00:25,419 INFO [train.py:898] (3/4) Epoch 28, batch 200, loss[loss=0.1604, simple_loss=0.2522, pruned_loss=0.03423, over 18231.00 frames. ], tot_loss[loss=0.1548, simple_loss=0.2455, pruned_loss=0.03206, over 2290552.38 frames. ], batch size: 60, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 22:00:28,426 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.36 vs. limit=5.0 2023-03-09 22:00:37,989 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98330.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 22:01:24,231 INFO [train.py:898] (3/4) Epoch 28, batch 250, loss[loss=0.1532, simple_loss=0.2488, pruned_loss=0.02879, over 18390.00 frames. ], tot_loss[loss=0.1552, simple_loss=0.246, pruned_loss=0.0322, over 2581775.11 frames. 
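The zipformer.py:625 entries give each layer its own warmup window (warmup_begin/warmup_end staggered in steps of 666.7) and report which layers were randomly dropped for this batch; the occasional num_to_drop=1 at batch_count near 100k shows that a small residual drop probability survives long after warmup. A sketch of that scheduling; the 0.5 and 0.05 probabilities and the linear ramp are assumptions:

```python
# Hypothetical layer-drop schedule: a high drop probability inside each
# layer's warmup window, ramping down to a small residual probability that
# stays active for the rest of training.

import random

def layerdrop_prob(batch_count, warmup_begin, warmup_end,
                   initial_prob=0.5, final_prob=0.05):  # assumed constants
    if batch_count < warmup_begin:
        return initial_prob
    if batch_count >= warmup_end:
        return final_prob
    frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
    return initial_prob + frac * (final_prob - initial_prob)

def layers_to_drop(batch_count, windows, rng=random):
    """windows: one (warmup_begin, warmup_end) pair per layer."""
    drop = {i for i, (b, e) in enumerate(windows)
            if rng.random() < layerdrop_prob(batch_count, b, e)}
    print(f"batch_count={batch_count}, num_to_drop={len(drop)}, "
          f"layers_to_drop={drop}")
    return drop

layers_to_drop(97061.0, [(3333.3, 4000.0)] * 3)  # usually prints set()
```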
], batch size: 52, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 22:01:34,750 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=98378.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 22:02:04,209 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0263, 4.1774, 2.6116, 4.1782, 5.3210, 3.2692, 3.6599, 3.7042], device='cuda:3'), covar=tensor([0.0208, 0.1317, 0.1438, 0.0605, 0.0083, 0.0857, 0.0726, 0.0935], device='cuda:3'), in_proj_covar=tensor([0.0183, 0.0280, 0.0209, 0.0201, 0.0143, 0.0185, 0.0222, 0.0230], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:02:11,172 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.633e+02 2.515e+02 2.972e+02 3.446e+02 6.764e+02, threshold=5.944e+02, percent-clipped=3.0 2023-03-09 22:02:12,632 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98410.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:02:20,185 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9677, 3.6835, 5.1320, 3.0336, 4.4784, 2.6064, 3.1711, 1.7723], device='cuda:3'), covar=tensor([0.1235, 0.1020, 0.0166, 0.1003, 0.0536, 0.2786, 0.2671, 0.2478], device='cuda:3'), in_proj_covar=tensor([0.0233, 0.0255, 0.0229, 0.0210, 0.0267, 0.0282, 0.0339, 0.0248], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:02:23,115 INFO [train.py:898] (3/4) Epoch 28, batch 300, loss[loss=0.1458, simple_loss=0.2324, pruned_loss=0.02962, over 18491.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2463, pruned_loss=0.0324, over 2790734.56 frames. ], batch size: 47, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 22:03:14,425 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98462.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:03:22,741 INFO [train.py:898] (3/4) Epoch 28, batch 350, loss[loss=0.1682, simple_loss=0.2703, pruned_loss=0.03302, over 17147.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2462, pruned_loss=0.03233, over 2978257.51 frames. 
], batch size: 78, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 22:03:25,468 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98471.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:03:57,705 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1438, 5.1983, 5.2662, 4.9021, 5.0428, 5.0160, 5.2982, 5.3410], device='cuda:3'), covar=tensor([0.0065, 0.0061, 0.0049, 0.0112, 0.0052, 0.0143, 0.0071, 0.0075], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0075, 0.0080, 0.0101, 0.0080, 0.0109, 0.0092, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 22:03:59,144 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8664, 3.6796, 5.0353, 3.0013, 4.3595, 2.5057, 3.0665, 1.6963], device='cuda:3'), covar=tensor([0.1221, 0.0959, 0.0177, 0.0965, 0.0561, 0.2862, 0.2696, 0.2448], device='cuda:3'), in_proj_covar=tensor([0.0234, 0.0255, 0.0230, 0.0210, 0.0268, 0.0282, 0.0339, 0.0248], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:04:09,221 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.910e+02 2.547e+02 2.961e+02 3.445e+02 6.973e+02, threshold=5.923e+02, percent-clipped=1.0 2023-03-09 22:04:21,296 INFO [train.py:898] (3/4) Epoch 28, batch 400, loss[loss=0.1782, simple_loss=0.2648, pruned_loss=0.04575, over 13038.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2467, pruned_loss=0.03253, over 3107544.14 frames. ], batch size: 129, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 22:04:30,033 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1212, 5.5698, 5.1918, 5.3621, 5.2460, 5.1084, 5.6312, 5.5928], device='cuda:3'), covar=tensor([0.1149, 0.0713, 0.0727, 0.0681, 0.1290, 0.0689, 0.0591, 0.0650], device='cuda:3'), in_proj_covar=tensor([0.0639, 0.0561, 0.0401, 0.0582, 0.0783, 0.0577, 0.0795, 0.0608], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 22:04:47,313 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7146, 4.3275, 4.2659, 3.2673, 3.5896, 3.3198, 2.5289, 2.5329], device='cuda:3'), covar=tensor([0.0232, 0.0152, 0.0097, 0.0341, 0.0345, 0.0256, 0.0749, 0.0854], device='cuda:3'), in_proj_covar=tensor([0.0076, 0.0064, 0.0069, 0.0072, 0.0095, 0.0072, 0.0079, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006], device='cuda:3') 2023-03-09 22:04:49,567 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98543.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:04:50,552 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98544.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:05:04,200 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0 2023-03-09 22:05:20,090 INFO [train.py:898] (3/4) Epoch 28, batch 450, loss[loss=0.1805, simple_loss=0.2684, pruned_loss=0.04632, over 18286.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2468, pruned_loss=0.03247, over 3215616.57 frames. 
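The scaling.py:679 entries report a whiteness diagnostic on grouped feature channels. The metric below equals 1.0 for a perfectly white (isotropic) covariance and grows with the eigenvalue spread, and the module logs it when it exceeds the configured limit, hence the "metric=1.78 vs. limit=2.0" pattern. A hedged reconstruction; the real Whiten module also backpropagates a penalty when the limit is exceeded:

```python
# Whiteness metric: mean(eigenvalue^2) / mean(eigenvalue)^2 of the per-group
# feature covariance, computed via traces so no eigendecomposition is needed.

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """x: (num_frames, num_channels); channels split into num_groups."""
    num_frames, num_channels = x.shape
    cpg = num_channels // num_groups                  # channels per group
    xg = x.reshape(num_frames, num_groups, cpg).transpose(0, 1)
    xg = xg - xg.mean(dim=1, keepdim=True)
    covar = xg.transpose(1, 2) @ xg / num_frames      # (num_groups, cpg, cpg)
    num = (covar ** 2).sum(dim=(1, 2)) / cpg          # mean of squared eigvals
    den = (covar.diagonal(dim1=1, dim2=2).sum(dim=1) / cpg) ** 2
    return (num / den).mean().item()

x = torch.randn(1000, 96)                             # white input: metric ~ 1
print(f"Whitening: num_groups=8, num_channels=96, "
      f"metric={whitening_metric(x, 8):.2f} vs. limit=2.0")
```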
], batch size: 57, lr: 4.04e-03, grad_scale: 8.0 2023-03-09 22:05:46,793 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=98592.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:06:01,940 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98604.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:06:07,234 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.606e+02 2.466e+02 2.902e+02 3.493e+02 6.837e+02, threshold=5.804e+02, percent-clipped=4.0 2023-03-09 22:06:19,753 INFO [train.py:898] (3/4) Epoch 28, batch 500, loss[loss=0.1531, simple_loss=0.2366, pruned_loss=0.03482, over 18358.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.247, pruned_loss=0.03268, over 3295081.48 frames. ], batch size: 46, lr: 4.03e-03, grad_scale: 8.0 2023-03-09 22:07:18,548 INFO [train.py:898] (3/4) Epoch 28, batch 550, loss[loss=0.1558, simple_loss=0.252, pruned_loss=0.02976, over 18355.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2464, pruned_loss=0.03226, over 3365001.66 frames. ], batch size: 55, lr: 4.03e-03, grad_scale: 8.0 2023-03-09 22:08:05,610 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.504e+02 2.472e+02 2.832e+02 3.276e+02 6.900e+02, threshold=5.664e+02, percent-clipped=1.0 2023-03-09 22:08:17,680 INFO [train.py:898] (3/4) Epoch 28, batch 600, loss[loss=0.1348, simple_loss=0.22, pruned_loss=0.02486, over 17704.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2467, pruned_loss=0.03247, over 3398955.57 frames. ], batch size: 39, lr: 4.03e-03, grad_scale: 8.0 2023-03-09 22:08:33,750 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6505, 2.9346, 4.3186, 3.5070, 2.5841, 4.5195, 3.8110, 2.8545], device='cuda:3'), covar=tensor([0.0523, 0.1463, 0.0271, 0.0530, 0.1683, 0.0257, 0.0622, 0.1009], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0247, 0.0235, 0.0175, 0.0230, 0.0223, 0.0261, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 22:09:08,800 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98762.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:09:13,070 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98766.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:09:16,300 INFO [train.py:898] (3/4) Epoch 28, batch 650, loss[loss=0.1406, simple_loss=0.2246, pruned_loss=0.02833, over 18511.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2462, pruned_loss=0.03252, over 3441323.16 frames. ], batch size: 47, lr: 4.03e-03, grad_scale: 8.0 2023-03-09 22:09:22,006 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.49 vs. 
limit=2.0 2023-03-09 22:09:53,845 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4040, 3.2927, 2.0915, 4.2188, 2.9164, 3.9144, 2.3741, 3.7513], device='cuda:3'), covar=tensor([0.0701, 0.0927, 0.1600, 0.0513, 0.0903, 0.0312, 0.1260, 0.0443], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0234, 0.0196, 0.0298, 0.0198, 0.0273, 0.0208, 0.0209], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:10:03,544 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.904e+02 2.586e+02 3.038e+02 3.660e+02 8.440e+02, threshold=6.077e+02, percent-clipped=4.0 2023-03-09 22:10:04,947 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=98810.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:10:15,437 INFO [train.py:898] (3/4) Epoch 28, batch 700, loss[loss=0.1421, simple_loss=0.2383, pruned_loss=0.0229, over 18571.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2463, pruned_loss=0.03275, over 3473496.43 frames. ], batch size: 54, lr: 4.03e-03, grad_scale: 8.0 2023-03-09 22:11:14,533 INFO [train.py:898] (3/4) Epoch 28, batch 750, loss[loss=0.1853, simple_loss=0.2771, pruned_loss=0.0468, over 18565.00 frames. ], tot_loss[loss=0.156, simple_loss=0.2466, pruned_loss=0.03264, over 3509326.72 frames. ], batch size: 54, lr: 4.03e-03, grad_scale: 8.0 2023-03-09 22:11:28,384 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.5876, 3.9703, 2.3918, 3.8621, 4.8768, 2.5771, 3.4149, 3.8077], device='cuda:3'), covar=tensor([0.0231, 0.1082, 0.1698, 0.0675, 0.0139, 0.1250, 0.0847, 0.0760], device='cuda:3'), in_proj_covar=tensor([0.0183, 0.0279, 0.0208, 0.0200, 0.0142, 0.0184, 0.0221, 0.0230], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:11:39,346 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-09 22:11:40,045 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98890.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:11:46,246 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98895.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:11:50,709 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98899.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:12:02,001 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.555e+02 2.589e+02 3.008e+02 3.946e+02 6.579e+02, threshold=6.017e+02, percent-clipped=1.0 2023-03-09 22:12:13,962 INFO [train.py:898] (3/4) Epoch 28, batch 800, loss[loss=0.1393, simple_loss=0.2213, pruned_loss=0.02868, over 18249.00 frames. ], tot_loss[loss=0.156, simple_loss=0.2468, pruned_loss=0.03264, over 3527046.09 frames. 
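The zipformer.py:1455 entries dump one attn_weights_entropy value per attention head: low values mean a sharply focused head, values near log(seq_len) mean a nearly uniform one, which is why the logged tensors mix entries like 0.01 and 5.0. A hedged reconstruction of the logged quantity, averaging the entropy of each head's attention distribution over batch and query positions:

```python
# Per-head attention entropy diagnostic (hedged reconstruction).

import torch

def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    """attn_weights: (num_heads, batch, tgt_len, src_len), rows sum to 1."""
    p = attn_weights.clamp(min=1e-20)
    ent = -(p * p.log()).sum(dim=-1)  # entropy per (head, batch, query)
    return ent.mean(dim=(1, 2))       # one scalar per head, as in the log

w = torch.softmax(torch.randn(8, 4, 50, 50), dim=-1)
print(attn_weights_entropy(w))        # tensor with 8 per-head entropies
```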
], batch size: 45, lr: 4.03e-03, grad_scale: 8.0 2023-03-09 22:12:45,997 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3829, 5.3945, 5.0526, 5.3257, 5.3426, 4.7875, 5.2271, 4.9793], device='cuda:3'), covar=tensor([0.0491, 0.0457, 0.1264, 0.0769, 0.0548, 0.0387, 0.0447, 0.1176], device='cuda:3'), in_proj_covar=tensor([0.0520, 0.0595, 0.0734, 0.0459, 0.0477, 0.0541, 0.0578, 0.0708], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 22:12:52,312 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98951.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:12:58,160 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98956.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:13:12,399 INFO [train.py:898] (3/4) Epoch 28, batch 850, loss[loss=0.1423, simple_loss=0.2274, pruned_loss=0.02861, over 18157.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2462, pruned_loss=0.03237, over 3553063.35 frames. ], batch size: 44, lr: 4.03e-03, grad_scale: 8.0 2023-03-09 22:13:15,548 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98971.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:13:46,321 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 22:14:00,122 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.700e+02 2.503e+02 2.990e+02 3.802e+02 7.789e+02, threshold=5.979e+02, percent-clipped=2.0 2023-03-09 22:14:11,479 INFO [train.py:898] (3/4) Epoch 28, batch 900, loss[loss=0.1437, simple_loss=0.2275, pruned_loss=0.02998, over 18563.00 frames. ], tot_loss[loss=0.1557, simple_loss=0.2462, pruned_loss=0.03261, over 3558921.20 frames. 
], batch size: 45, lr: 4.03e-03, grad_scale: 8.0 2023-03-09 22:14:11,997 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9214, 3.7997, 5.3011, 3.0252, 4.5783, 2.7573, 3.1939, 1.9511], device='cuda:3'), covar=tensor([0.1221, 0.0948, 0.0128, 0.1017, 0.0491, 0.2752, 0.2802, 0.2279], device='cuda:3'), in_proj_covar=tensor([0.0233, 0.0255, 0.0231, 0.0210, 0.0267, 0.0281, 0.0338, 0.0247], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:14:27,836 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99032.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:14:48,774 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3875, 5.9377, 5.5447, 5.7156, 5.4917, 5.3267, 5.9684, 5.9589], device='cuda:3'), covar=tensor([0.1076, 0.0755, 0.0495, 0.0664, 0.1447, 0.0714, 0.0600, 0.0693], device='cuda:3'), in_proj_covar=tensor([0.0645, 0.0564, 0.0406, 0.0588, 0.0792, 0.0585, 0.0804, 0.0617], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 22:14:49,969 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99051.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:14:52,828 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9602, 3.6301, 5.1282, 2.9561, 4.4279, 2.6995, 3.1163, 1.9045], device='cuda:3'), covar=tensor([0.1257, 0.1042, 0.0187, 0.1069, 0.0550, 0.2702, 0.2899, 0.2404], device='cuda:3'), in_proj_covar=tensor([0.0234, 0.0256, 0.0232, 0.0211, 0.0269, 0.0283, 0.0340, 0.0248], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:15:07,207 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99066.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:15:10,382 INFO [train.py:898] (3/4) Epoch 28, batch 950, loss[loss=0.1446, simple_loss=0.2323, pruned_loss=0.02842, over 18352.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2464, pruned_loss=0.0327, over 3569712.16 frames. ], batch size: 46, lr: 4.02e-03, grad_scale: 8.0 2023-03-09 22:15:24,617 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-09 22:15:57,908 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.971e+02 2.497e+02 3.033e+02 3.631e+02 7.579e+02, threshold=6.066e+02, percent-clipped=1.0 2023-03-09 22:16:01,744 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99112.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:16:03,880 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=99114.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:16:09,569 INFO [train.py:898] (3/4) Epoch 28, batch 1000, loss[loss=0.1538, simple_loss=0.243, pruned_loss=0.03233, over 18246.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2461, pruned_loss=0.03256, over 3579367.04 frames. ], batch size: 47, lr: 4.02e-03, grad_scale: 8.0 2023-03-09 22:17:08,456 INFO [train.py:898] (3/4) Epoch 28, batch 1050, loss[loss=0.1315, simple_loss=0.2177, pruned_loss=0.02264, over 18435.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2459, pruned_loss=0.03247, over 3576881.78 frames. 
], batch size: 43, lr: 4.02e-03, grad_scale: 8.0 2023-03-09 22:17:43,478 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99199.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:17:55,237 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.556e+02 2.410e+02 2.895e+02 3.644e+02 7.761e+02, threshold=5.789e+02, percent-clipped=3.0 2023-03-09 22:18:06,854 INFO [train.py:898] (3/4) Epoch 28, batch 1100, loss[loss=0.146, simple_loss=0.2361, pruned_loss=0.02796, over 18403.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.246, pruned_loss=0.03247, over 3575175.57 frames. ], batch size: 48, lr: 4.02e-03, grad_scale: 8.0 2023-03-09 22:18:17,537 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7984, 3.6144, 4.8413, 4.2223, 3.3090, 2.8627, 4.2667, 5.1347], device='cuda:3'), covar=tensor([0.0828, 0.1416, 0.0263, 0.0461, 0.0990, 0.1421, 0.0464, 0.0235], device='cuda:3'), in_proj_covar=tensor([0.0154, 0.0284, 0.0173, 0.0188, 0.0198, 0.0198, 0.0201, 0.0212], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:18:18,529 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99229.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:18:26,114 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0 2023-03-09 22:18:39,075 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99246.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:18:40,212 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=99247.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:18:44,755 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99251.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:19:05,472 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99268.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:19:06,230 INFO [train.py:898] (3/4) Epoch 28, batch 1150, loss[loss=0.1504, simple_loss=0.2492, pruned_loss=0.02583, over 18500.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2468, pruned_loss=0.03237, over 3571366.92 frames. ], batch size: 53, lr: 4.02e-03, grad_scale: 8.0 2023-03-09 22:19:06,796 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.08 vs. limit=5.0 2023-03-09 22:19:31,911 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99290.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:19:53,828 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.536e+02 2.497e+02 2.883e+02 3.542e+02 9.328e+02, threshold=5.765e+02, percent-clipped=1.0 2023-03-09 22:20:05,839 INFO [train.py:898] (3/4) Epoch 28, batch 1200, loss[loss=0.1705, simple_loss=0.2683, pruned_loss=0.03633, over 18106.00 frames. ], tot_loss[loss=0.1546, simple_loss=0.2456, pruned_loss=0.03185, over 3579911.14 frames. ], batch size: 62, lr: 4.02e-03, grad_scale: 8.0 2023-03-09 22:20:12,076 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99324.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:20:15,195 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99327.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:20:15,947 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. 
limit=2.0 2023-03-09 22:20:17,712 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99329.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:20:20,333 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.98 vs. limit=2.0 2023-03-09 22:20:23,533 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.12 vs. limit=5.0 2023-03-09 22:20:56,906 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8621, 4.4740, 4.4101, 3.3858, 3.7038, 3.5182, 2.6849, 2.5517], device='cuda:3'), covar=tensor([0.0224, 0.0154, 0.0111, 0.0335, 0.0371, 0.0238, 0.0708, 0.0877], device='cuda:3'), in_proj_covar=tensor([0.0076, 0.0064, 0.0070, 0.0073, 0.0095, 0.0072, 0.0080, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006], device='cuda:3') 2023-03-09 22:21:02,142 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7137, 4.0862, 2.5325, 3.9957, 5.1596, 2.7112, 3.6842, 3.9677], device='cuda:3'), covar=tensor([0.0261, 0.1189, 0.1662, 0.0782, 0.0129, 0.1231, 0.0806, 0.0780], device='cuda:3'), in_proj_covar=tensor([0.0184, 0.0281, 0.0209, 0.0201, 0.0143, 0.0185, 0.0222, 0.0231], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:21:04,011 INFO [train.py:898] (3/4) Epoch 28, batch 1250, loss[loss=0.1593, simple_loss=0.2547, pruned_loss=0.03196, over 15574.00 frames. ], tot_loss[loss=0.1543, simple_loss=0.2451, pruned_loss=0.03178, over 3593687.95 frames. ], batch size: 94, lr: 4.02e-03, grad_scale: 8.0 2023-03-09 22:21:17,747 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1788, 3.4356, 3.3068, 2.9613, 3.0488, 2.9505, 2.5046, 2.3453], device='cuda:3'), covar=tensor([0.0254, 0.0146, 0.0154, 0.0310, 0.0361, 0.0245, 0.0608, 0.0748], device='cuda:3'), in_proj_covar=tensor([0.0076, 0.0064, 0.0070, 0.0073, 0.0095, 0.0072, 0.0080, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006], device='cuda:3') 2023-03-09 22:21:23,321 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99385.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:21:49,413 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99407.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:21:51,417 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.740e+02 2.716e+02 3.087e+02 3.833e+02 7.403e+02, threshold=6.174e+02, percent-clipped=2.0 2023-03-09 22:22:03,380 INFO [train.py:898] (3/4) Epoch 28, batch 1300, loss[loss=0.1505, simple_loss=0.2377, pruned_loss=0.03169, over 18504.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.246, pruned_loss=0.03246, over 3569023.92 frames. 
], batch size: 47, lr: 4.02e-03, grad_scale: 16.0 2023-03-09 22:22:04,970 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99420.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:22:20,535 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2043, 5.3024, 5.5284, 5.5568, 5.0947, 6.0463, 5.7001, 5.2942], device='cuda:3'), covar=tensor([0.1143, 0.0619, 0.0789, 0.0824, 0.1396, 0.0685, 0.0732, 0.1689], device='cuda:3'), in_proj_covar=tensor([0.0376, 0.0307, 0.0333, 0.0336, 0.0343, 0.0446, 0.0299, 0.0437], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 22:23:01,819 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99468.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 22:23:02,624 INFO [train.py:898] (3/4) Epoch 28, batch 1350, loss[loss=0.1595, simple_loss=0.2526, pruned_loss=0.03316, over 18610.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2462, pruned_loss=0.03237, over 3557901.99 frames. ], batch size: 52, lr: 4.02e-03, grad_scale: 16.0 2023-03-09 22:23:17,572 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99481.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:23:42,856 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-09 22:23:49,955 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.936e+02 2.551e+02 3.034e+02 3.736e+02 7.669e+02, threshold=6.068e+02, percent-clipped=1.0 2023-03-09 22:23:58,864 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. limit=2.0 2023-03-09 22:24:01,496 INFO [train.py:898] (3/4) Epoch 28, batch 1400, loss[loss=0.1659, simple_loss=0.2614, pruned_loss=0.03515, over 18268.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.246, pruned_loss=0.03245, over 3565801.02 frames. ], batch size: 60, lr: 4.02e-03, grad_scale: 16.0 2023-03-09 22:24:13,812 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99529.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 22:24:16,608 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5063, 2.8276, 2.4016, 2.7890, 3.5867, 3.4630, 3.0707, 2.8328], device='cuda:3'), covar=tensor([0.0168, 0.0311, 0.0598, 0.0394, 0.0182, 0.0178, 0.0384, 0.0407], device='cuda:3'), in_proj_covar=tensor([0.0150, 0.0149, 0.0170, 0.0169, 0.0145, 0.0132, 0.0164, 0.0167], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0003, 0.0004], device='cuda:3') 2023-03-09 22:24:33,161 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99546.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:24:39,352 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99551.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:24:59,120 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99568.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:24:59,945 INFO [train.py:898] (3/4) Epoch 28, batch 1450, loss[loss=0.1736, simple_loss=0.2669, pruned_loss=0.04018, over 18455.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.2468, pruned_loss=0.03281, over 3566255.93 frames. 
], batch size: 59, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:25:19,503 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99585.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:25:24,203 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99589.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:25:29,616 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=99594.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:25:35,267 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=99599.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:25:48,002 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.800e+02 2.454e+02 2.982e+02 3.510e+02 7.061e+02, threshold=5.964e+02, percent-clipped=2.0 2023-03-09 22:25:59,259 INFO [train.py:898] (3/4) Epoch 28, batch 1500, loss[loss=0.1279, simple_loss=0.2122, pruned_loss=0.02177, over 18401.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2462, pruned_loss=0.03249, over 3572219.15 frames. ], batch size: 42, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:26:01,950 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4659, 5.9437, 5.5583, 5.7794, 5.6104, 5.4618, 6.0365, 5.9764], device='cuda:3'), covar=tensor([0.1119, 0.0797, 0.0478, 0.0707, 0.1353, 0.0671, 0.0581, 0.0709], device='cuda:3'), in_proj_covar=tensor([0.0638, 0.0557, 0.0402, 0.0582, 0.0782, 0.0579, 0.0794, 0.0610], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 22:26:05,321 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99624.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:26:07,168 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7066, 3.7033, 3.5430, 3.0264, 3.4337, 2.5626, 2.4524, 3.7238], device='cuda:3'), covar=tensor([0.0079, 0.0110, 0.0098, 0.0182, 0.0104, 0.0283, 0.0369, 0.0069], device='cuda:3'), in_proj_covar=tensor([0.0160, 0.0179, 0.0149, 0.0199, 0.0158, 0.0191, 0.0195, 0.0136], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:26:09,260 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99627.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:26:11,644 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99629.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:26:35,866 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99650.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:26:58,072 INFO [train.py:898] (3/4) Epoch 28, batch 1550, loss[loss=0.144, simple_loss=0.2351, pruned_loss=0.02645, over 18386.00 frames. ], tot_loss[loss=0.1563, simple_loss=0.2472, pruned_loss=0.03271, over 3575109.79 frames. 
], batch size: 50, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:27:05,106 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=99675.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:27:11,348 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99680.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:27:11,459 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99680.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:27:43,156 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99707.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:27:45,107 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.790e+02 2.548e+02 3.050e+02 3.629e+02 6.290e+02, threshold=6.100e+02, percent-clipped=1.0 2023-03-09 22:27:57,322 INFO [train.py:898] (3/4) Epoch 28, batch 1600, loss[loss=0.1597, simple_loss=0.256, pruned_loss=0.03173, over 17787.00 frames. ], tot_loss[loss=0.1561, simple_loss=0.2473, pruned_loss=0.03244, over 3583556.48 frames. ], batch size: 70, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:28:06,635 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7692, 5.2148, 2.5059, 5.0265, 4.9131, 5.2405, 4.9934, 2.5524], device='cuda:3'), covar=tensor([0.0277, 0.0072, 0.0896, 0.0084, 0.0084, 0.0061, 0.0097, 0.1125], device='cuda:3'), in_proj_covar=tensor([0.0095, 0.0084, 0.0099, 0.0100, 0.0091, 0.0081, 0.0088, 0.0100], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 22:28:15,979 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9752, 5.7005, 5.3156, 5.6013, 5.1027, 5.5037, 5.8937, 5.7135], device='cuda:3'), covar=tensor([0.2404, 0.1454, 0.0828, 0.1230, 0.2706, 0.1128, 0.0960, 0.1320], device='cuda:3'), in_proj_covar=tensor([0.0639, 0.0559, 0.0404, 0.0584, 0.0785, 0.0582, 0.0800, 0.0612], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 22:28:22,914 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99741.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:28:39,283 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=99755.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:28:48,095 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8624, 3.6052, 5.1435, 2.8493, 4.4948, 2.5370, 3.0482, 1.7575], device='cuda:3'), covar=tensor([0.1275, 0.1035, 0.0176, 0.1081, 0.0456, 0.2820, 0.2725, 0.2455], device='cuda:3'), in_proj_covar=tensor([0.0231, 0.0254, 0.0229, 0.0209, 0.0266, 0.0280, 0.0336, 0.0246], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:28:56,208 INFO [train.py:898] (3/4) Epoch 28, batch 1650, loss[loss=0.1615, simple_loss=0.2508, pruned_loss=0.03613, over 18482.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.2475, pruned_loss=0.03247, over 3578331.04 frames. ], batch size: 51, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:29:04,519 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99776.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:29:37,924 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. 
limit=2.0 2023-03-09 22:29:42,826 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.627e+02 2.331e+02 2.829e+02 3.453e+02 7.033e+02, threshold=5.658e+02, percent-clipped=1.0 2023-03-09 22:29:49,513 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7785, 4.8242, 4.8681, 4.5891, 4.7151, 4.6919, 4.9281, 4.9438], device='cuda:3'), covar=tensor([0.0083, 0.0069, 0.0065, 0.0127, 0.0061, 0.0161, 0.0072, 0.0100], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0074, 0.0079, 0.0099, 0.0079, 0.0107, 0.0090, 0.0090], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 22:29:55,640 INFO [train.py:898] (3/4) Epoch 28, batch 1700, loss[loss=0.1369, simple_loss=0.2391, pruned_loss=0.01733, over 18367.00 frames. ], tot_loss[loss=0.1563, simple_loss=0.2476, pruned_loss=0.03248, over 3587948.89 frames. ], batch size: 55, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:30:01,649 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99824.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 22:30:09,907 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2652, 3.9718, 5.4018, 3.3048, 4.7462, 2.9486, 3.2752, 2.1375], device='cuda:3'), covar=tensor([0.1072, 0.0907, 0.0163, 0.0907, 0.0454, 0.2641, 0.2747, 0.2232], device='cuda:3'), in_proj_covar=tensor([0.0232, 0.0256, 0.0230, 0.0210, 0.0267, 0.0281, 0.0338, 0.0248], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:30:14,478 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8347, 3.9020, 3.7075, 3.3820, 3.5774, 3.0290, 3.0858, 3.8753], device='cuda:3'), covar=tensor([0.0075, 0.0092, 0.0086, 0.0142, 0.0114, 0.0211, 0.0205, 0.0076], device='cuda:3'), in_proj_covar=tensor([0.0160, 0.0179, 0.0149, 0.0198, 0.0158, 0.0192, 0.0194, 0.0136], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:30:54,422 INFO [train.py:898] (3/4) Epoch 28, batch 1750, loss[loss=0.1464, simple_loss=0.2326, pruned_loss=0.03005, over 18216.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2479, pruned_loss=0.0326, over 3591403.19 frames. ], batch size: 45, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:31:12,655 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99885.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:31:39,965 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.785e+02 2.621e+02 3.029e+02 3.741e+02 5.430e+02, threshold=6.058e+02, percent-clipped=0.0 2023-03-09 22:31:51,917 INFO [train.py:898] (3/4) Epoch 28, batch 1800, loss[loss=0.1532, simple_loss=0.2457, pruned_loss=0.03038, over 18490.00 frames. ], tot_loss[loss=0.1563, simple_loss=0.2475, pruned_loss=0.03256, over 3595880.24 frames. ], batch size: 53, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:31:58,607 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99924.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:31:58,729 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99924.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:32:03,907 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. 
limit=2.0 2023-03-09 22:32:08,638 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=99933.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:32:08,919 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7814, 4.1159, 2.4929, 4.0394, 5.1490, 2.5668, 3.8913, 4.0683], device='cuda:3'), covar=tensor([0.0213, 0.1197, 0.1665, 0.0680, 0.0095, 0.1336, 0.0685, 0.0702], device='cuda:3'), in_proj_covar=tensor([0.0186, 0.0283, 0.0211, 0.0201, 0.0144, 0.0186, 0.0224, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:32:22,796 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99945.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:32:38,586 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9474, 3.8330, 5.2511, 2.9764, 4.6189, 2.6944, 3.0943, 1.7937], device='cuda:3'), covar=tensor([0.1241, 0.0952, 0.0172, 0.1050, 0.0445, 0.2800, 0.3082, 0.2402], device='cuda:3'), in_proj_covar=tensor([0.0233, 0.0256, 0.0230, 0.0210, 0.0267, 0.0282, 0.0339, 0.0248], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:32:51,461 INFO [train.py:898] (3/4) Epoch 28, batch 1850, loss[loss=0.1767, simple_loss=0.2675, pruned_loss=0.043, over 18238.00 frames. ], tot_loss[loss=0.1569, simple_loss=0.2483, pruned_loss=0.03277, over 3595904.77 frames. ], batch size: 60, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:32:56,002 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=99972.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:33:05,234 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99980.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:33:19,562 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6928, 3.5112, 4.7612, 4.1824, 3.0805, 2.7890, 4.1166, 5.0616], device='cuda:3'), covar=tensor([0.0931, 0.1604, 0.0301, 0.0523, 0.1201, 0.1479, 0.0572, 0.0333], device='cuda:3'), in_proj_covar=tensor([0.0156, 0.0287, 0.0175, 0.0189, 0.0201, 0.0201, 0.0204, 0.0216], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:33:41,437 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100007.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 22:33:43,361 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.762e+02 2.401e+02 2.680e+02 3.285e+02 5.327e+02, threshold=5.360e+02, percent-clipped=0.0 2023-03-09 22:33:55,962 INFO [train.py:898] (3/4) Epoch 28, batch 1900, loss[loss=0.1469, simple_loss=0.239, pruned_loss=0.02738, over 18436.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2477, pruned_loss=0.03265, over 3602019.10 frames. 
], batch size: 43, lr: 4.01e-03, grad_scale: 16.0 2023-03-09 22:34:06,780 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=100028.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:34:08,015 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.7219, 6.2475, 5.7985, 6.0980, 5.8844, 5.6645, 6.3237, 6.2478], device='cuda:3'), covar=tensor([0.1018, 0.0731, 0.0378, 0.0590, 0.1312, 0.0680, 0.0498, 0.0706], device='cuda:3'), in_proj_covar=tensor([0.0634, 0.0556, 0.0397, 0.0578, 0.0777, 0.0576, 0.0792, 0.0608], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 22:34:15,689 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100036.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:34:23,145 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0239, 5.4869, 5.4763, 5.5349, 4.9151, 5.3563, 4.7544, 5.3786], device='cuda:3'), covar=tensor([0.0254, 0.0308, 0.0211, 0.0446, 0.0419, 0.0262, 0.1181, 0.0338], device='cuda:3'), in_proj_covar=tensor([0.0235, 0.0276, 0.0278, 0.0356, 0.0288, 0.0287, 0.0318, 0.0280], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 22:34:31,502 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-09 22:34:53,702 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100068.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 22:34:54,393 INFO [train.py:898] (3/4) Epoch 28, batch 1950, loss[loss=0.1738, simple_loss=0.2695, pruned_loss=0.03902, over 18286.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2464, pruned_loss=0.03221, over 3602165.80 frames. ], batch size: 57, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:35:03,903 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100076.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:35:41,391 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.807e+02 2.428e+02 2.748e+02 3.405e+02 9.642e+02, threshold=5.496e+02, percent-clipped=3.0 2023-03-09 22:35:53,072 INFO [train.py:898] (3/4) Epoch 28, batch 2000, loss[loss=0.16, simple_loss=0.2405, pruned_loss=0.03975, over 18351.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2464, pruned_loss=0.03238, over 3606880.76 frames. ], batch size: 46, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:35:59,785 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=100124.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:35:59,855 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100124.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 22:36:24,744 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-09 22:36:53,003 INFO [train.py:898] (3/4) Epoch 28, batch 2050, loss[loss=0.1593, simple_loss=0.2511, pruned_loss=0.03373, over 18284.00 frames. ], tot_loss[loss=0.1543, simple_loss=0.245, pruned_loss=0.03177, over 3596801.53 frames. ], batch size: 54, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:36:56,634 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=100172.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 22:37:30,849 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.70 vs. 
limit=2.0 2023-03-09 22:37:39,776 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.832e+02 2.490e+02 2.932e+02 3.409e+02 5.416e+02, threshold=5.865e+02, percent-clipped=0.0 2023-03-09 22:37:51,115 INFO [train.py:898] (3/4) Epoch 28, batch 2100, loss[loss=0.1536, simple_loss=0.2442, pruned_loss=0.03153, over 18280.00 frames. ], tot_loss[loss=0.1547, simple_loss=0.2457, pruned_loss=0.03189, over 3608269.42 frames. ], batch size: 49, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:37:57,777 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100224.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:38:14,618 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100238.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:38:22,568 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100245.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:38:50,261 INFO [train.py:898] (3/4) Epoch 28, batch 2150, loss[loss=0.155, simple_loss=0.2483, pruned_loss=0.03087, over 18497.00 frames. ], tot_loss[loss=0.1547, simple_loss=0.2456, pruned_loss=0.03185, over 3607032.58 frames. ], batch size: 53, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:38:54,621 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=100272.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:39:19,977 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=100293.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:39:27,024 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100299.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:39:38,059 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.888e+02 2.494e+02 2.925e+02 3.420e+02 7.055e+02, threshold=5.850e+02, percent-clipped=2.0 2023-03-09 22:39:50,186 INFO [train.py:898] (3/4) Epoch 28, batch 2200, loss[loss=0.1516, simple_loss=0.2509, pruned_loss=0.02611, over 18481.00 frames. ], tot_loss[loss=0.1539, simple_loss=0.2449, pruned_loss=0.0314, over 3605775.01 frames. ], batch size: 53, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:40:11,111 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100336.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:40:43,176 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100363.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 22:40:49,836 INFO [train.py:898] (3/4) Epoch 28, batch 2250, loss[loss=0.1489, simple_loss=0.2298, pruned_loss=0.03398, over 18375.00 frames. ], tot_loss[loss=0.1542, simple_loss=0.245, pruned_loss=0.03171, over 3589408.89 frames. ], batch size: 46, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:41:07,430 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=100384.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:41:29,604 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100402.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:41:30,140 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. 
limit=2.0 2023-03-09 22:41:37,425 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.960e+02 2.460e+02 2.795e+02 3.389e+02 7.041e+02, threshold=5.590e+02, percent-clipped=1.0 2023-03-09 22:41:39,991 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1971, 5.7114, 5.2826, 5.5171, 5.3450, 5.1516, 5.7727, 5.7159], device='cuda:3'), covar=tensor([0.1084, 0.0728, 0.0637, 0.0670, 0.1287, 0.0745, 0.0542, 0.0657], device='cuda:3'), in_proj_covar=tensor([0.0633, 0.0559, 0.0401, 0.0580, 0.0778, 0.0579, 0.0792, 0.0609], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 22:41:49,304 INFO [train.py:898] (3/4) Epoch 28, batch 2300, loss[loss=0.1346, simple_loss=0.2159, pruned_loss=0.02662, over 18506.00 frames. ], tot_loss[loss=0.1546, simple_loss=0.2455, pruned_loss=0.03187, over 3576277.67 frames. ], batch size: 44, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:42:41,877 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100463.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 22:42:48,934 INFO [train.py:898] (3/4) Epoch 28, batch 2350, loss[loss=0.1599, simple_loss=0.2581, pruned_loss=0.0309, over 18320.00 frames. ], tot_loss[loss=0.1549, simple_loss=0.2458, pruned_loss=0.03203, over 3575849.06 frames. ], batch size: 54, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:43:36,870 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.732e+02 2.362e+02 2.730e+02 3.461e+02 8.407e+02, threshold=5.459e+02, percent-clipped=2.0 2023-03-09 22:43:48,279 INFO [train.py:898] (3/4) Epoch 28, batch 2400, loss[loss=0.1764, simple_loss=0.2761, pruned_loss=0.03831, over 17987.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2468, pruned_loss=0.03238, over 3578258.54 frames. ], batch size: 65, lr: 4.00e-03, grad_scale: 16.0 2023-03-09 22:44:46,656 INFO [train.py:898] (3/4) Epoch 28, batch 2450, loss[loss=0.1554, simple_loss=0.2304, pruned_loss=0.0402, over 18423.00 frames. ], tot_loss[loss=0.156, simple_loss=0.2466, pruned_loss=0.0327, over 3573720.74 frames. ], batch size: 43, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:44:50,896 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 22:45:16,714 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100594.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:45:34,575 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.748e+02 2.510e+02 2.947e+02 3.432e+02 5.400e+02, threshold=5.894e+02, percent-clipped=0.0 2023-03-09 22:45:45,749 INFO [train.py:898] (3/4) Epoch 28, batch 2500, loss[loss=0.1698, simple_loss=0.2559, pruned_loss=0.04187, over 18624.00 frames. ], tot_loss[loss=0.156, simple_loss=0.2466, pruned_loss=0.03268, over 3568839.05 frames. ], batch size: 52, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:46:39,059 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100663.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 22:46:45,732 INFO [train.py:898] (3/4) Epoch 28, batch 2550, loss[loss=0.1443, simple_loss=0.2254, pruned_loss=0.03163, over 18415.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2463, pruned_loss=0.03242, over 3570954.49 frames. 
], batch size: 42, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:47:17,625 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.1472, 3.8554, 5.3210, 3.1630, 4.6940, 2.7832, 3.3284, 1.9351], device='cuda:3'), covar=tensor([0.1110, 0.0983, 0.0146, 0.0989, 0.0429, 0.2622, 0.2664, 0.2275], device='cuda:3'), in_proj_covar=tensor([0.0232, 0.0254, 0.0229, 0.0209, 0.0266, 0.0281, 0.0339, 0.0248], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:47:33,004 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.831e+02 2.422e+02 2.744e+02 3.332e+02 5.019e+02, threshold=5.488e+02, percent-clipped=0.0 2023-03-09 22:47:36,164 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=100711.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 22:47:44,774 INFO [train.py:898] (3/4) Epoch 28, batch 2600, loss[loss=0.1569, simple_loss=0.2513, pruned_loss=0.03123, over 17346.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2461, pruned_loss=0.03239, over 3569411.73 frames. ], batch size: 78, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:47:58,497 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100730.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 22:48:31,113 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100758.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 22:48:32,717 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-09 22:48:43,892 INFO [train.py:898] (3/4) Epoch 28, batch 2650, loss[loss=0.1595, simple_loss=0.2575, pruned_loss=0.0308, over 17105.00 frames. ], tot_loss[loss=0.1552, simple_loss=0.2459, pruned_loss=0.03229, over 3580118.07 frames. ], batch size: 78, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:48:59,333 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8440, 4.9198, 4.9938, 4.6441, 4.6908, 4.7030, 4.9994, 5.0024], device='cuda:3'), covar=tensor([0.0070, 0.0059, 0.0063, 0.0111, 0.0057, 0.0160, 0.0063, 0.0089], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0075, 0.0080, 0.0101, 0.0080, 0.0109, 0.0092, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 22:49:09,565 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100791.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 22:49:31,290 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.698e+02 2.383e+02 2.847e+02 3.406e+02 8.018e+02, threshold=5.694e+02, percent-clipped=5.0 2023-03-09 22:49:43,076 INFO [train.py:898] (3/4) Epoch 28, batch 2700, loss[loss=0.1641, simple_loss=0.2569, pruned_loss=0.03559, over 18396.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2464, pruned_loss=0.03262, over 3572572.27 frames. 
], batch size: 52, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:49:56,547 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4410, 5.2671, 5.6773, 5.7410, 5.3052, 6.2173, 5.9106, 5.4469], device='cuda:3'), covar=tensor([0.1089, 0.0660, 0.0726, 0.0688, 0.1383, 0.0664, 0.0642, 0.1892], device='cuda:3'), in_proj_covar=tensor([0.0376, 0.0307, 0.0333, 0.0335, 0.0341, 0.0448, 0.0300, 0.0440], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 22:50:41,485 INFO [train.py:898] (3/4) Epoch 28, batch 2750, loss[loss=0.1493, simple_loss=0.244, pruned_loss=0.02732, over 18296.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2459, pruned_loss=0.03238, over 3580837.56 frames. ], batch size: 49, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:51:10,648 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100894.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:51:27,175 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.680e+02 2.345e+02 2.707e+02 3.433e+02 5.623e+02, threshold=5.414e+02, percent-clipped=0.0 2023-03-09 22:51:40,788 INFO [train.py:898] (3/4) Epoch 28, batch 2800, loss[loss=0.1516, simple_loss=0.2482, pruned_loss=0.0275, over 18504.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2463, pruned_loss=0.03218, over 3589787.55 frames. ], batch size: 51, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:51:44,756 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4119, 3.8517, 2.2919, 3.8417, 4.7474, 2.6087, 3.6039, 3.7825], device='cuda:3'), covar=tensor([0.0255, 0.1078, 0.1698, 0.0619, 0.0123, 0.1171, 0.0718, 0.0713], device='cuda:3'), in_proj_covar=tensor([0.0187, 0.0284, 0.0211, 0.0201, 0.0145, 0.0187, 0.0225, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:52:05,673 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6649, 3.0839, 3.7641, 3.5855, 2.9952, 2.8883, 3.5369, 3.9074], device='cuda:3'), covar=tensor([0.0783, 0.1186, 0.0381, 0.0483, 0.0962, 0.1120, 0.0506, 0.0392], device='cuda:3'), in_proj_covar=tensor([0.0156, 0.0288, 0.0176, 0.0190, 0.0200, 0.0200, 0.0204, 0.0216], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:52:07,524 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=100942.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:52:11,240 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7076, 2.9704, 2.5400, 2.9406, 3.6893, 3.6404, 3.2038, 2.9323], device='cuda:3'), covar=tensor([0.0181, 0.0266, 0.0595, 0.0386, 0.0187, 0.0158, 0.0364, 0.0384], device='cuda:3'), in_proj_covar=tensor([0.0148, 0.0148, 0.0169, 0.0168, 0.0144, 0.0131, 0.0162, 0.0166], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 22:52:19,114 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4898, 5.4540, 5.1406, 5.3986, 5.4142, 4.7925, 5.3043, 5.0208], device='cuda:3'), covar=tensor([0.0397, 0.0400, 0.1078, 0.0747, 0.0496, 0.0400, 0.0400, 0.0995], device='cuda:3'), in_proj_covar=tensor([0.0520, 0.0590, 0.0732, 0.0458, 0.0482, 0.0540, 0.0572, 0.0712], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 22:52:38,801 INFO [train.py:898] (3/4) Epoch 
28, batch 2850, loss[loss=0.1385, simple_loss=0.2235, pruned_loss=0.02669, over 18267.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2464, pruned_loss=0.03227, over 3577053.58 frames. ], batch size: 45, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:53:26,397 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.467e+02 2.435e+02 2.818e+02 3.350e+02 5.539e+02, threshold=5.635e+02, percent-clipped=3.0 2023-03-09 22:53:37,849 INFO [train.py:898] (3/4) Epoch 28, batch 2900, loss[loss=0.1419, simple_loss=0.2367, pruned_loss=0.02353, over 18291.00 frames. ], tot_loss[loss=0.1551, simple_loss=0.2461, pruned_loss=0.032, over 3581667.36 frames. ], batch size: 49, lr: 3.99e-03, grad_scale: 16.0 2023-03-09 22:54:20,753 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5702, 3.5852, 3.4609, 3.1075, 3.4146, 2.8098, 2.7904, 3.5910], device='cuda:3'), covar=tensor([0.0080, 0.0100, 0.0091, 0.0140, 0.0106, 0.0204, 0.0216, 0.0087], device='cuda:3'), in_proj_covar=tensor([0.0158, 0.0179, 0.0148, 0.0198, 0.0157, 0.0190, 0.0193, 0.0136], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:54:25,186 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=101058.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:54:37,495 INFO [train.py:898] (3/4) Epoch 28, batch 2950, loss[loss=0.1483, simple_loss=0.2441, pruned_loss=0.02621, over 18637.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2467, pruned_loss=0.03212, over 3574357.10 frames. ], batch size: 52, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 22:54:56,236 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8669, 5.3725, 2.7835, 5.1271, 5.0899, 5.3757, 5.1392, 2.6594], device='cuda:3'), covar=tensor([0.0247, 0.0060, 0.0790, 0.0087, 0.0078, 0.0062, 0.0091, 0.1053], device='cuda:3'), in_proj_covar=tensor([0.0095, 0.0085, 0.0099, 0.0101, 0.0092, 0.0081, 0.0088, 0.0100], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 22:54:58,160 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=101086.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 22:55:21,310 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=101106.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:55:22,576 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101107.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:55:24,475 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.540e+02 2.503e+02 3.089e+02 3.620e+02 7.481e+02, threshold=6.178e+02, percent-clipped=1.0 2023-03-09 22:55:35,813 INFO [train.py:898] (3/4) Epoch 28, batch 3000, loss[loss=0.1443, simple_loss=0.2365, pruned_loss=0.026, over 18499.00 frames. ], tot_loss[loss=0.1551, simple_loss=0.246, pruned_loss=0.03206, over 3585724.51 frames. ], batch size: 53, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 22:55:35,814 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 22:55:47,893 INFO [train.py:932] (3/4) Epoch 28, validation: loss=0.1496, simple_loss=0.2475, pruned_loss=0.02587, over 944034.00 frames. 
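(Editorial note, not part of the train.py output: two relationships in the surrounding records can be checked by hand. Each reported loss is consistent with loss = simple_loss_scale * simple_loss + pruned_loss, with simple_loss_scale = 0.5 from the training configuration, and each optim.py:369 threshold is consistent with Clipping_scale times the middle of the five logged grad-norm quartile values. Below is a minimal Python sketch verifying both against numbers copied from this log; the variable names are illustrative only, not icefall's internal API.)

# Illustrative check only -- hypothetical names, not icefall's actual code.
simple_loss_scale = 0.5  # from the training configuration

# Epoch 28 validation record above: loss=0.1496, simple_loss=0.2475, pruned_loss=0.02587
simple_loss, pruned_loss = 0.2475, 0.02587
loss = simple_loss_scale * simple_loss + pruned_loss
assert abs(loss - 0.1496) < 5e-4  # 0.5 * 0.2475 + 0.02587 = 0.14962

# optim.py:369 record earlier in this epoch: grad-norm quartiles
# 1.762e+02 2.401e+02 2.680e+02 3.285e+02 5.327e+02, threshold=5.360e+02,
# logged with Clipping_scale=2.0; the threshold equals 2.0 * the median.
clipping_scale = 2.0
quartiles = [1.762e2, 2.401e2, 2.680e2, 3.285e2, 5.327e2]  # min, Q1, median, Q3, max
threshold = clipping_scale * quartiles[2]
assert abs(threshold - 5.360e2) < 1e-6

(The same two identities hold for the other train.py:898/932 and optim.py:369 records in this section.)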
2023-03-09 22:55:47,894 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 22:56:45,987 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=101168.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 22:56:46,675 INFO [train.py:898] (3/4) Epoch 28, batch 3050, loss[loss=0.1494, simple_loss=0.2439, pruned_loss=0.02739, over 18357.00 frames. ], tot_loss[loss=0.1551, simple_loss=0.2458, pruned_loss=0.03218, over 3601399.04 frames. ], batch size: 55, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 22:57:32,632 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.576e+02 2.344e+02 2.759e+02 3.609e+02 9.420e+02, threshold=5.517e+02, percent-clipped=1.0 2023-03-09 22:57:45,989 INFO [train.py:898] (3/4) Epoch 28, batch 3100, loss[loss=0.1289, simple_loss=0.2161, pruned_loss=0.02085, over 17733.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2459, pruned_loss=0.03208, over 3580281.85 frames. ], batch size: 39, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 22:57:49,932 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9815, 4.9578, 2.6218, 4.7833, 4.6073, 4.9658, 4.7242, 2.3517], device='cuda:3'), covar=tensor([0.0255, 0.0131, 0.0997, 0.0135, 0.0143, 0.0141, 0.0179, 0.1678], device='cuda:3'), in_proj_covar=tensor([0.0095, 0.0085, 0.0099, 0.0100, 0.0092, 0.0081, 0.0088, 0.0099], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 22:58:28,942 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4938, 3.5446, 3.3780, 3.0145, 3.3665, 2.7626, 2.6804, 3.5112], device='cuda:3'), covar=tensor([0.0089, 0.0110, 0.0095, 0.0166, 0.0103, 0.0214, 0.0242, 0.0081], device='cuda:3'), in_proj_covar=tensor([0.0157, 0.0177, 0.0147, 0.0196, 0.0156, 0.0188, 0.0191, 0.0135], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 22:58:45,542 INFO [train.py:898] (3/4) Epoch 28, batch 3150, loss[loss=0.1593, simple_loss=0.2509, pruned_loss=0.03387, over 18400.00 frames. ], tot_loss[loss=0.1552, simple_loss=0.246, pruned_loss=0.03223, over 3580610.53 frames. ], batch size: 52, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 22:58:46,056 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6080, 2.3600, 2.5328, 2.6528, 3.1354, 4.6261, 4.6807, 3.3206], device='cuda:3'), covar=tensor([0.2195, 0.2749, 0.3231, 0.2065, 0.2573, 0.0320, 0.0373, 0.1082], device='cuda:3'), in_proj_covar=tensor([0.0332, 0.0365, 0.0417, 0.0292, 0.0397, 0.0269, 0.0303, 0.0273], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 22:59:31,371 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.645e+02 2.594e+02 3.004e+02 3.371e+02 7.669e+02, threshold=6.008e+02, percent-clipped=3.0 2023-03-09 22:59:43,362 INFO [train.py:898] (3/4) Epoch 28, batch 3200, loss[loss=0.1479, simple_loss=0.241, pruned_loss=0.02741, over 18278.00 frames. ], tot_loss[loss=0.1552, simple_loss=0.2457, pruned_loss=0.0323, over 3585775.76 frames. 
], batch size: 49, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 22:59:45,578 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9096, 3.8954, 3.8002, 3.5305, 3.7216, 3.2143, 3.1504, 3.9653], device='cuda:3'), covar=tensor([0.0079, 0.0115, 0.0074, 0.0117, 0.0103, 0.0161, 0.0188, 0.0060], device='cuda:3'), in_proj_covar=tensor([0.0157, 0.0177, 0.0147, 0.0196, 0.0156, 0.0188, 0.0191, 0.0135], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 23:00:41,868 INFO [train.py:898] (3/4) Epoch 28, batch 3250, loss[loss=0.1777, simple_loss=0.2694, pruned_loss=0.04301, over 18457.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2464, pruned_loss=0.03267, over 3591108.99 frames. ], batch size: 59, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 23:01:02,817 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=101386.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 23:01:29,437 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.724e+02 2.697e+02 3.326e+02 3.955e+02 8.507e+02, threshold=6.653e+02, percent-clipped=3.0 2023-03-09 23:01:40,848 INFO [train.py:898] (3/4) Epoch 28, batch 3300, loss[loss=0.1641, simple_loss=0.2586, pruned_loss=0.03482, over 18367.00 frames. ], tot_loss[loss=0.1565, simple_loss=0.2473, pruned_loss=0.03282, over 3598761.13 frames. ], batch size: 55, lr: 3.98e-03, grad_scale: 32.0 2023-03-09 23:02:00,150 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=101434.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 23:02:33,273 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=101463.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:02:39,848 INFO [train.py:898] (3/4) Epoch 28, batch 3350, loss[loss=0.1625, simple_loss=0.2567, pruned_loss=0.03411, over 16159.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2468, pruned_loss=0.03245, over 3602770.20 frames. ], batch size: 94, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 23:02:59,110 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-09 23:03:15,097 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4522, 2.8447, 4.1895, 3.6031, 2.6920, 4.4273, 3.7840, 2.6977], device='cuda:3'), covar=tensor([0.0593, 0.1579, 0.0370, 0.0484, 0.1590, 0.0255, 0.0641, 0.1067], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0248, 0.0236, 0.0175, 0.0230, 0.0221, 0.0260, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 23:03:28,358 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.802e+02 2.584e+02 3.127e+02 3.784e+02 7.730e+02, threshold=6.254e+02, percent-clipped=1.0 2023-03-09 23:03:38,776 INFO [train.py:898] (3/4) Epoch 28, batch 3400, loss[loss=0.1428, simple_loss=0.2277, pruned_loss=0.02889, over 18496.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2466, pruned_loss=0.03258, over 3594899.71 frames. ], batch size: 47, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 23:04:38,414 INFO [train.py:898] (3/4) Epoch 28, batch 3450, loss[loss=0.1436, simple_loss=0.2296, pruned_loss=0.02877, over 18274.00 frames. ], tot_loss[loss=0.1551, simple_loss=0.2457, pruned_loss=0.03224, over 3596568.84 frames. 
], batch size: 47, lr: 3.98e-03, grad_scale: 16.0 2023-03-09 23:05:28,088 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.546e+02 2.589e+02 3.041e+02 3.646e+02 6.515e+02, threshold=6.081e+02, percent-clipped=1.0 2023-03-09 23:05:37,347 INFO [train.py:898] (3/4) Epoch 28, batch 3500, loss[loss=0.1385, simple_loss=0.2189, pruned_loss=0.02902, over 18414.00 frames. ], tot_loss[loss=0.1552, simple_loss=0.2462, pruned_loss=0.03214, over 3591997.79 frames. ], batch size: 43, lr: 3.97e-03, grad_scale: 8.0 2023-03-09 23:05:46,259 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-09 23:06:34,340 INFO [train.py:898] (3/4) Epoch 28, batch 3550, loss[loss=0.1709, simple_loss=0.2656, pruned_loss=0.03807, over 18091.00 frames. ], tot_loss[loss=0.1549, simple_loss=0.2457, pruned_loss=0.03208, over 3595708.50 frames. ], batch size: 62, lr: 3.97e-03, grad_scale: 8.0 2023-03-09 23:06:50,702 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101684.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:07:08,836 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4346, 4.7950, 4.4306, 4.6515, 4.5369, 4.4622, 4.8925, 4.8132], device='cuda:3'), covar=tensor([0.1107, 0.0861, 0.1662, 0.0735, 0.1359, 0.0763, 0.0721, 0.0785], device='cuda:3'), in_proj_covar=tensor([0.0645, 0.0569, 0.0408, 0.0594, 0.0788, 0.0590, 0.0805, 0.0618], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 23:07:19,611 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.673e+02 2.328e+02 2.616e+02 3.053e+02 4.840e+02, threshold=5.233e+02, percent-clipped=0.0 2023-03-09 23:07:28,385 INFO [train.py:898] (3/4) Epoch 28, batch 3600, loss[loss=0.1515, simple_loss=0.2509, pruned_loss=0.02602, over 18380.00 frames. ], tot_loss[loss=0.1549, simple_loss=0.246, pruned_loss=0.03197, over 3587778.72 frames. ], batch size: 55, lr: 3.97e-03, grad_scale: 8.0 2023-03-09 23:07:51,952 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4607, 2.9910, 4.2662, 3.5677, 2.6041, 4.5494, 3.8477, 2.7817], device='cuda:3'), covar=tensor([0.0589, 0.1414, 0.0346, 0.0510, 0.1660, 0.0198, 0.0645, 0.1041], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0246, 0.0235, 0.0174, 0.0228, 0.0219, 0.0258, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 23:07:57,176 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=101745.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:08:02,781 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0231, 4.0962, 3.5594, 4.0059, 4.1001, 3.6751, 3.9468, 3.7394], device='cuda:3'), covar=tensor([0.0911, 0.0995, 0.2068, 0.1052, 0.0806, 0.0591, 0.0925, 0.1425], device='cuda:3'), in_proj_covar=tensor([0.0522, 0.0590, 0.0730, 0.0458, 0.0483, 0.0540, 0.0573, 0.0711], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 23:08:29,826 INFO [train.py:898] (3/4) Epoch 29, batch 0, loss[loss=0.1725, simple_loss=0.2696, pruned_loss=0.03771, over 18400.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2696, pruned_loss=0.03771, over 18400.00 frames. 
], batch size: 50, lr: 3.90e-03, grad_scale: 8.0 2023-03-09 23:08:29,826 INFO [train.py:923] (3/4) Computing validation loss 2023-03-09 23:08:41,776 INFO [train.py:932] (3/4) Epoch 29, validation: loss=0.1494, simple_loss=0.2476, pruned_loss=0.02556, over 944034.00 frames. 2023-03-09 23:08:41,777 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB 2023-03-09 23:08:54,535 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=101763.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:09:40,300 INFO [train.py:898] (3/4) Epoch 29, batch 50, loss[loss=0.1599, simple_loss=0.2562, pruned_loss=0.03187, over 18516.00 frames. ], tot_loss[loss=0.1579, simple_loss=0.2481, pruned_loss=0.03384, over 812621.12 frames. ], batch size: 59, lr: 3.90e-03, grad_scale: 4.0 2023-03-09 23:09:50,573 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=101811.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:09:51,448 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.892e+02 2.454e+02 2.836e+02 3.650e+02 1.455e+03, threshold=5.673e+02, percent-clipped=9.0 2023-03-09 23:10:38,670 INFO [train.py:898] (3/4) Epoch 29, batch 100, loss[loss=0.1712, simple_loss=0.2569, pruned_loss=0.04279, over 17668.00 frames. ], tot_loss[loss=0.1569, simple_loss=0.2476, pruned_loss=0.03308, over 1434860.46 frames. ], batch size: 70, lr: 3.90e-03, grad_scale: 4.0 2023-03-09 23:10:52,891 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0297, 5.1609, 5.1634, 4.8902, 4.9116, 4.9043, 5.2171, 5.2742], device='cuda:3'), covar=tensor([0.0073, 0.0054, 0.0054, 0.0111, 0.0060, 0.0150, 0.0068, 0.0082], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0075, 0.0080, 0.0100, 0.0079, 0.0109, 0.0092, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 23:10:58,506 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-09 23:11:38,498 INFO [train.py:898] (3/4) Epoch 29, batch 150, loss[loss=0.1745, simple_loss=0.2676, pruned_loss=0.04072, over 17737.00 frames. ], tot_loss[loss=0.1564, simple_loss=0.2472, pruned_loss=0.0328, over 1896077.26 frames. ], batch size: 70, lr: 3.90e-03, grad_scale: 4.0 2023-03-09 23:11:49,332 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.704e+02 2.518e+02 2.968e+02 3.653e+02 8.556e+02, threshold=5.936e+02, percent-clipped=5.0 2023-03-09 23:11:53,925 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8070, 5.1857, 2.5178, 5.0215, 4.9446, 5.2093, 4.9613, 2.7157], device='cuda:3'), covar=tensor([0.0260, 0.0088, 0.0863, 0.0087, 0.0074, 0.0067, 0.0096, 0.0982], device='cuda:3'), in_proj_covar=tensor([0.0094, 0.0084, 0.0098, 0.0099, 0.0092, 0.0080, 0.0087, 0.0099], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 23:12:00,397 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101920.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 23:12:38,454 INFO [train.py:898] (3/4) Epoch 29, batch 200, loss[loss=0.1737, simple_loss=0.2658, pruned_loss=0.04079, over 18347.00 frames. ], tot_loss[loss=0.1574, simple_loss=0.2486, pruned_loss=0.03309, over 2274784.74 frames. 
], batch size: 56, lr: 3.90e-03, grad_scale: 4.0 2023-03-09 23:12:49,412 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2464, 5.2307, 5.5867, 5.6594, 5.1105, 6.0804, 5.7528, 5.3310], device='cuda:3'), covar=tensor([0.1304, 0.0678, 0.0757, 0.0643, 0.1769, 0.0719, 0.0592, 0.1803], device='cuda:3'), in_proj_covar=tensor([0.0379, 0.0310, 0.0336, 0.0337, 0.0346, 0.0451, 0.0305, 0.0444], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 23:13:12,320 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=101981.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 23:13:18,817 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101987.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:13:42,396 INFO [train.py:898] (3/4) Epoch 29, batch 250, loss[loss=0.1638, simple_loss=0.265, pruned_loss=0.03135, over 18631.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.2471, pruned_loss=0.03265, over 2570662.33 frames. ], batch size: 52, lr: 3.90e-03, grad_scale: 4.0 2023-03-09 23:13:47,363 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3075, 5.2879, 4.9261, 5.2089, 5.2073, 4.5711, 5.1009, 4.8842], device='cuda:3'), covar=tensor([0.0430, 0.0466, 0.1316, 0.0787, 0.0641, 0.0480, 0.0498, 0.1172], device='cuda:3'), in_proj_covar=tensor([0.0529, 0.0599, 0.0742, 0.0465, 0.0491, 0.0548, 0.0581, 0.0722], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0005, 0.0005, 0.0006], device='cuda:3') 2023-03-09 23:13:52,741 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.574e+02 2.573e+02 2.918e+02 3.581e+02 7.098e+02, threshold=5.836e+02, percent-clipped=3.0 2023-03-09 23:13:58,185 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8138, 3.6855, 5.0577, 4.4157, 3.4762, 3.1651, 4.5335, 5.2452], device='cuda:3'), covar=tensor([0.0818, 0.1469, 0.0210, 0.0393, 0.0885, 0.1141, 0.0363, 0.0262], device='cuda:3'), in_proj_covar=tensor([0.0156, 0.0288, 0.0177, 0.0190, 0.0200, 0.0201, 0.0203, 0.0217], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 23:14:26,061 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=102040.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:14:35,229 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=102048.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:14:41,083 INFO [train.py:898] (3/4) Epoch 29, batch 300, loss[loss=0.1586, simple_loss=0.2536, pruned_loss=0.03177, over 18635.00 frames. ], tot_loss[loss=0.1568, simple_loss=0.2479, pruned_loss=0.03283, over 2800637.64 frames. ], batch size: 52, lr: 3.90e-03, grad_scale: 4.0 2023-03-09 23:15:40,930 INFO [train.py:898] (3/4) Epoch 29, batch 350, loss[loss=0.1433, simple_loss=0.223, pruned_loss=0.03181, over 18380.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2464, pruned_loss=0.03238, over 2981302.13 frames. ], batch size: 42, lr: 3.89e-03, grad_scale: 4.0 2023-03-09 23:15:51,452 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.890e+02 2.462e+02 2.956e+02 3.341e+02 6.335e+02, threshold=5.913e+02, percent-clipped=2.0 2023-03-09 23:16:40,560 INFO [train.py:898] (3/4) Epoch 29, batch 400, loss[loss=0.1519, simple_loss=0.2487, pruned_loss=0.02757, over 18401.00 frames. 
], tot_loss[loss=0.1553, simple_loss=0.246, pruned_loss=0.03231, over 3120284.02 frames. ], batch size: 52, lr: 3.89e-03, grad_scale: 8.0 2023-03-09 23:17:40,602 INFO [train.py:898] (3/4) Epoch 29, batch 450, loss[loss=0.1694, simple_loss=0.2643, pruned_loss=0.03723, over 18571.00 frames. ], tot_loss[loss=0.1547, simple_loss=0.2452, pruned_loss=0.03214, over 3227286.58 frames. ], batch size: 54, lr: 3.89e-03, grad_scale: 8.0 2023-03-09 23:17:50,993 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.718e+02 2.619e+02 3.042e+02 3.635e+02 5.576e+02, threshold=6.084e+02, percent-clipped=0.0 2023-03-09 23:18:20,967 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5491, 2.9220, 4.3832, 3.6628, 2.7365, 4.5811, 3.9789, 2.8765], device='cuda:3'), covar=tensor([0.0597, 0.1575, 0.0346, 0.0526, 0.1627, 0.0270, 0.0571, 0.1006], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0248, 0.0236, 0.0175, 0.0231, 0.0222, 0.0261, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 23:18:37,558 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0122, 5.1036, 5.1295, 4.7757, 4.9503, 4.8894, 5.1201, 5.1705], device='cuda:3'), covar=tensor([0.0074, 0.0060, 0.0063, 0.0122, 0.0052, 0.0152, 0.0071, 0.0083], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0075, 0.0080, 0.0100, 0.0079, 0.0108, 0.0092, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 23:18:40,652 INFO [train.py:898] (3/4) Epoch 29, batch 500, loss[loss=0.1561, simple_loss=0.2464, pruned_loss=0.03288, over 18606.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2457, pruned_loss=0.03213, over 3306828.64 frames. ], batch size: 52, lr: 3.89e-03, grad_scale: 8.0 2023-03-09 23:19:07,685 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=102276.0, num_to_drop=1, layers_to_drop={2} 2023-03-09 23:19:40,243 INFO [train.py:898] (3/4) Epoch 29, batch 550, loss[loss=0.1606, simple_loss=0.2547, pruned_loss=0.03325, over 12803.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2465, pruned_loss=0.03229, over 3361217.00 frames. ], batch size: 130, lr: 3.89e-03, grad_scale: 8.0 2023-03-09 23:19:50,540 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.948e+02 2.553e+02 3.006e+02 3.452e+02 5.617e+02, threshold=6.011e+02, percent-clipped=0.0 2023-03-09 23:20:07,486 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6039, 3.0362, 4.4966, 3.7890, 2.9486, 4.6912, 4.0834, 3.0179], device='cuda:3'), covar=tensor([0.0603, 0.1492, 0.0287, 0.0496, 0.1449, 0.0236, 0.0549, 0.0907], device='cuda:3'), in_proj_covar=tensor([0.0226, 0.0249, 0.0237, 0.0175, 0.0231, 0.0222, 0.0261, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 23:20:24,931 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=102340.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:20:28,210 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=102343.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:20:37,307 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0 2023-03-09 23:20:39,914 INFO [train.py:898] (3/4) Epoch 29, batch 600, loss[loss=0.1509, simple_loss=0.2335, pruned_loss=0.03419, over 18383.00 frames. 
], tot_loss[loss=0.1557, simple_loss=0.2468, pruned_loss=0.03231, over 3409905.11 frames. ], batch size: 50, lr: 3.89e-03, grad_scale: 8.0 2023-03-09 23:21:17,850 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6464, 2.3047, 2.6161, 2.6539, 3.0599, 4.7291, 4.7163, 3.1852], device='cuda:3'), covar=tensor([0.2093, 0.2701, 0.3166, 0.2049, 0.2711, 0.0313, 0.0381, 0.1152], device='cuda:3'), in_proj_covar=tensor([0.0332, 0.0365, 0.0416, 0.0292, 0.0398, 0.0270, 0.0303, 0.0274], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:3') 2023-03-09 23:21:20,777 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=102388.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:21:39,535 INFO [train.py:898] (3/4) Epoch 29, batch 650, loss[loss=0.148, simple_loss=0.2382, pruned_loss=0.02891, over 18333.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2467, pruned_loss=0.0323, over 3439267.46 frames. ], batch size: 46, lr: 3.89e-03, grad_scale: 8.0 2023-03-09 23:21:49,326 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.777e+02 2.518e+02 2.994e+02 3.652e+02 5.297e+02, threshold=5.988e+02, percent-clipped=0.0 2023-03-09 23:22:06,455 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3929, 5.3535, 5.6867, 5.7359, 5.3314, 6.2259, 5.9318, 5.4141], device='cuda:3'), covar=tensor([0.1064, 0.0665, 0.0710, 0.0605, 0.1353, 0.0716, 0.0585, 0.1739], device='cuda:3'), in_proj_covar=tensor([0.0381, 0.0311, 0.0338, 0.0339, 0.0347, 0.0452, 0.0306, 0.0448], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 23:22:38,232 INFO [train.py:898] (3/4) Epoch 29, batch 700, loss[loss=0.1376, simple_loss=0.2189, pruned_loss=0.02811, over 18407.00 frames. ], tot_loss[loss=0.1548, simple_loss=0.2457, pruned_loss=0.03189, over 3482323.18 frames. ], batch size: 42, lr: 3.89e-03, grad_scale: 8.0 2023-03-09 23:23:37,713 INFO [train.py:898] (3/4) Epoch 29, batch 750, loss[loss=0.1592, simple_loss=0.2614, pruned_loss=0.02847, over 16087.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2463, pruned_loss=0.0321, over 3490665.29 frames. 
], batch size: 94, lr: 3.89e-03, grad_scale: 8.0 2023-03-09 23:23:48,502 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.674e+02 2.550e+02 3.053e+02 3.427e+02 7.248e+02, threshold=6.105e+02, percent-clipped=3.0 2023-03-09 23:23:59,003 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1319, 5.2182, 5.2200, 4.9126, 4.9636, 4.9514, 5.2914, 5.2847], device='cuda:3'), covar=tensor([0.0068, 0.0058, 0.0053, 0.0108, 0.0054, 0.0158, 0.0064, 0.0076], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0074, 0.0080, 0.0100, 0.0079, 0.0108, 0.0091, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 23:24:18,342 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9376, 5.4651, 5.4396, 5.4368, 4.9140, 5.3805, 4.8468, 5.3246], device='cuda:3'), covar=tensor([0.0267, 0.0245, 0.0175, 0.0459, 0.0402, 0.0218, 0.0982, 0.0320], device='cuda:3'), in_proj_covar=tensor([0.0237, 0.0281, 0.0280, 0.0360, 0.0290, 0.0289, 0.0321, 0.0281], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 23:24:35,934 INFO [train.py:898] (3/4) Epoch 29, batch 800, loss[loss=0.1484, simple_loss=0.2408, pruned_loss=0.02801, over 18383.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2465, pruned_loss=0.03205, over 3519607.25 frames. ], batch size: 50, lr: 3.89e-03, grad_scale: 8.0 2023-03-09 23:24:57,282 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-09 23:25:03,702 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=102576.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 23:25:35,087 INFO [train.py:898] (3/4) Epoch 29, batch 850, loss[loss=0.1724, simple_loss=0.2598, pruned_loss=0.04247, over 18213.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2463, pruned_loss=0.0319, over 3539090.83 frames. ], batch size: 60, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:25:44,474 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. 
limit=5.0 2023-03-09 23:25:45,058 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8614, 4.1026, 2.4866, 3.9512, 5.2510, 2.4980, 3.8084, 4.0698], device='cuda:3'), covar=tensor([0.0230, 0.1203, 0.1693, 0.0684, 0.0092, 0.1355, 0.0742, 0.0732], device='cuda:3'), in_proj_covar=tensor([0.0189, 0.0288, 0.0213, 0.0204, 0.0147, 0.0188, 0.0226, 0.0236], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 23:25:45,741 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.923e+02 2.608e+02 3.070e+02 3.838e+02 1.078e+03, threshold=6.141e+02, percent-clipped=3.0 2023-03-09 23:25:47,219 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4637, 3.0840, 4.3808, 3.7289, 2.7934, 4.5861, 3.8612, 2.9077], device='cuda:3'), covar=tensor([0.0647, 0.1340, 0.0309, 0.0445, 0.1530, 0.0201, 0.0593, 0.1013], device='cuda:3'), in_proj_covar=tensor([0.0227, 0.0249, 0.0237, 0.0175, 0.0231, 0.0222, 0.0262, 0.0202], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3') 2023-03-09 23:26:00,306 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=102624.0, num_to_drop=1, layers_to_drop={0} 2023-03-09 23:26:10,172 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9234, 4.2270, 4.1229, 4.2211, 3.8523, 4.1288, 3.8430, 4.1506], device='cuda:3'), covar=tensor([0.0291, 0.0308, 0.0278, 0.0550, 0.0353, 0.0257, 0.0817, 0.0343], device='cuda:3'), in_proj_covar=tensor([0.0237, 0.0282, 0.0282, 0.0362, 0.0291, 0.0291, 0.0322, 0.0282], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 23:26:22,287 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=102643.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:26:34,018 INFO [train.py:898] (3/4) Epoch 29, batch 900, loss[loss=0.1677, simple_loss=0.2618, pruned_loss=0.03684, over 18346.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2464, pruned_loss=0.03239, over 3541434.17 frames. ], batch size: 56, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:26:39,937 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5636, 6.0404, 5.6615, 5.8625, 5.6553, 5.5046, 6.1349, 6.0786], device='cuda:3'), covar=tensor([0.1117, 0.0891, 0.0422, 0.0735, 0.1434, 0.0725, 0.0579, 0.0730], device='cuda:3'), in_proj_covar=tensor([0.0650, 0.0572, 0.0409, 0.0597, 0.0793, 0.0594, 0.0814, 0.0624], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 23:27:18,911 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=102691.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:27:21,372 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0383, 5.4981, 2.6191, 5.3852, 5.1427, 5.5118, 5.2925, 2.7728], device='cuda:3'), covar=tensor([0.0237, 0.0101, 0.0950, 0.0077, 0.0112, 0.0111, 0.0119, 0.1110], device='cuda:3'), in_proj_covar=tensor([0.0094, 0.0085, 0.0099, 0.0101, 0.0092, 0.0081, 0.0088, 0.0100], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3') 2023-03-09 23:27:32,922 INFO [train.py:898] (3/4) Epoch 29, batch 950, loss[loss=0.1574, simple_loss=0.2541, pruned_loss=0.0304, over 18121.00 frames. ], tot_loss[loss=0.1551, simple_loss=0.2461, pruned_loss=0.0321, over 3559431.92 frames. 
], batch size: 62, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:27:43,050 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.728e+02 2.495e+02 2.903e+02 3.483e+02 7.298e+02, threshold=5.805e+02, percent-clipped=1.0 2023-03-09 23:28:24,488 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.09 vs. limit=5.0 2023-03-09 23:28:31,532 INFO [train.py:898] (3/4) Epoch 29, batch 1000, loss[loss=0.1709, simple_loss=0.2618, pruned_loss=0.03997, over 18483.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2465, pruned_loss=0.03199, over 3563523.15 frames. ], batch size: 53, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:28:48,404 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8283, 5.3747, 5.3118, 5.3465, 4.7438, 5.2212, 4.7124, 5.1900], device='cuda:3'), covar=tensor([0.0280, 0.0243, 0.0204, 0.0437, 0.0450, 0.0238, 0.1067, 0.0364], device='cuda:3'), in_proj_covar=tensor([0.0239, 0.0283, 0.0282, 0.0363, 0.0292, 0.0292, 0.0322, 0.0284], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 23:28:56,540 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0 2023-03-09 23:29:00,185 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5682, 2.2460, 2.4281, 2.7315, 2.8377, 4.6982, 4.8144, 3.2598], device='cuda:3'), covar=tensor([0.2519, 0.3403, 0.3966, 0.2239, 0.3924, 0.0371, 0.0383, 0.1186], device='cuda:3'), in_proj_covar=tensor([0.0333, 0.0366, 0.0417, 0.0294, 0.0398, 0.0271, 0.0305, 0.0275], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002], device='cuda:3') 2023-03-09 23:29:04,642 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8218, 4.2178, 2.4370, 3.9960, 5.1985, 2.5609, 3.7383, 3.9011], device='cuda:3'), covar=tensor([0.0222, 0.0999, 0.1833, 0.0739, 0.0097, 0.1485, 0.0811, 0.0785], device='cuda:3'), in_proj_covar=tensor([0.0188, 0.0285, 0.0211, 0.0202, 0.0146, 0.0187, 0.0225, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 23:29:11,634 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-09 23:29:31,352 INFO [train.py:898] (3/4) Epoch 29, batch 1050, loss[loss=0.1679, simple_loss=0.2633, pruned_loss=0.03627, over 18356.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2469, pruned_loss=0.032, over 3567036.76 frames. 
], batch size: 56, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:29:34,211 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0388, 3.8537, 5.3153, 3.1946, 4.6612, 2.7394, 3.2260, 1.8475], device='cuda:3'), covar=tensor([0.1109, 0.0939, 0.0137, 0.0886, 0.0409, 0.2460, 0.2478, 0.2270], device='cuda:3'), in_proj_covar=tensor([0.0235, 0.0256, 0.0233, 0.0211, 0.0267, 0.0283, 0.0341, 0.0250], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-09 23:29:42,514 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.563e+02 2.351e+02 2.635e+02 3.103e+02 8.908e+02, threshold=5.271e+02, percent-clipped=2.0 2023-03-09 23:30:27,427 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=102851.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:30:29,920 INFO [train.py:898] (3/4) Epoch 29, batch 1100, loss[loss=0.1398, simple_loss=0.2221, pruned_loss=0.02873, over 17707.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2469, pruned_loss=0.03206, over 3578012.93 frames. ], batch size: 39, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:31:10,262 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=102887.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:31:28,100 INFO [train.py:898] (3/4) Epoch 29, batch 1150, loss[loss=0.1403, simple_loss=0.2258, pruned_loss=0.02745, over 18421.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2467, pruned_loss=0.03215, over 3578716.11 frames. ], batch size: 43, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:31:32,439 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9789, 4.9478, 4.9969, 4.7574, 4.7942, 4.8612, 5.1078, 5.1086], device='cuda:3'), covar=tensor([0.0085, 0.0079, 0.0070, 0.0126, 0.0068, 0.0166, 0.0089, 0.0098], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0074, 0.0080, 0.0099, 0.0079, 0.0108, 0.0091, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 23:31:38,786 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.637e+02 2.444e+02 3.032e+02 3.732e+02 6.439e+02, threshold=6.064e+02, percent-clipped=4.0 2023-03-09 23:31:39,283 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=102912.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:32:22,237 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=102948.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:32:27,593 INFO [train.py:898] (3/4) Epoch 29, batch 1200, loss[loss=0.1555, simple_loss=0.2508, pruned_loss=0.03013, over 16120.00 frames. ], tot_loss[loss=0.1547, simple_loss=0.2459, pruned_loss=0.03178, over 3586077.81 frames. 
], batch size: 94, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:32:57,437 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8740, 3.7298, 5.1531, 4.5803, 3.4510, 3.1093, 4.7399, 5.4276], device='cuda:3'), covar=tensor([0.0762, 0.1643, 0.0199, 0.0372, 0.0933, 0.1188, 0.0335, 0.0165], device='cuda:3'), in_proj_covar=tensor([0.0153, 0.0282, 0.0175, 0.0186, 0.0197, 0.0196, 0.0201, 0.0214], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 23:33:17,608 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9692, 4.0177, 4.0219, 3.8887, 3.8799, 3.9228, 4.0908, 4.0965], device='cuda:3'), covar=tensor([0.0095, 0.0074, 0.0081, 0.0118, 0.0073, 0.0149, 0.0074, 0.0085], device='cuda:3'), in_proj_covar=tensor([0.0100, 0.0074, 0.0080, 0.0100, 0.0079, 0.0108, 0.0091, 0.0091], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3') 2023-03-09 23:33:26,585 INFO [train.py:898] (3/4) Epoch 29, batch 1250, loss[loss=0.1627, simple_loss=0.2561, pruned_loss=0.03469, over 18364.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2466, pruned_loss=0.03211, over 3581484.49 frames. ], batch size: 56, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:33:36,999 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.897e+02 2.667e+02 2.982e+02 3.378e+02 7.201e+02, threshold=5.965e+02, percent-clipped=1.0 2023-03-09 23:33:43,309 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0218, 5.4609, 5.4589, 5.4313, 4.9156, 5.3549, 4.8506, 5.3787], device='cuda:3'), covar=tensor([0.0217, 0.0249, 0.0176, 0.0424, 0.0391, 0.0229, 0.0978, 0.0300], device='cuda:3'), in_proj_covar=tensor([0.0236, 0.0281, 0.0280, 0.0360, 0.0290, 0.0291, 0.0319, 0.0281], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3') 2023-03-09 23:34:22,176 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4923, 5.4607, 5.1392, 5.3883, 5.4431, 4.8158, 5.3259, 5.0285], device='cuda:3'), covar=tensor([0.0450, 0.0419, 0.1170, 0.0793, 0.0553, 0.0400, 0.0398, 0.1065], device='cuda:3'), in_proj_covar=tensor([0.0524, 0.0593, 0.0733, 0.0458, 0.0482, 0.0538, 0.0571, 0.0713], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3') 2023-03-09 23:34:25,436 INFO [train.py:898] (3/4) Epoch 29, batch 1300, loss[loss=0.1239, simple_loss=0.2076, pruned_loss=0.02006, over 18416.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2461, pruned_loss=0.03227, over 3576956.85 frames. ], batch size: 43, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:34:35,186 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8412, 3.7799, 3.6379, 3.3663, 3.5868, 3.1418, 3.0222, 3.8846], device='cuda:3'), covar=tensor([0.0079, 0.0124, 0.0093, 0.0139, 0.0115, 0.0192, 0.0211, 0.0065], device='cuda:3'), in_proj_covar=tensor([0.0160, 0.0180, 0.0151, 0.0201, 0.0160, 0.0192, 0.0196, 0.0140], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3') 2023-03-09 23:35:19,410 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103098.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:35:24,708 INFO [train.py:898] (3/4) Epoch 29, batch 1350, loss[loss=0.1499, simple_loss=0.2408, pruned_loss=0.02953, over 18391.00 frames. 
], tot_loss[loss=0.1555, simple_loss=0.2463, pruned_loss=0.03231, over 3579209.05 frames. ], batch size: 48, lr: 3.88e-03, grad_scale: 8.0 2023-03-09 23:35:35,180 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.567e+02 2.524e+02 2.935e+02 3.557e+02 7.291e+02, threshold=5.869e+02, percent-clipped=2.0 2023-03-09 23:35:58,998 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103132.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:36:11,528 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.5045, 5.3456, 5.7564, 5.8175, 5.4190, 6.3136, 6.0420, 5.6587], device='cuda:3'), covar=tensor([0.1068, 0.0650, 0.0779, 0.0780, 0.1452, 0.0635, 0.0590, 0.1429], device='cuda:3'), in_proj_covar=tensor([0.0381, 0.0310, 0.0337, 0.0339, 0.0347, 0.0450, 0.0305, 0.0447], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 23:36:23,954 INFO [train.py:898] (3/4) Epoch 29, batch 1400, loss[loss=0.1369, simple_loss=0.2147, pruned_loss=0.02955, over 18445.00 frames. ], tot_loss[loss=0.1552, simple_loss=0.2456, pruned_loss=0.03236, over 3576140.32 frames. ], batch size: 43, lr: 3.87e-03, grad_scale: 8.0 2023-03-09 23:36:31,073 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103159.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:37:11,246 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103193.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:37:22,802 INFO [train.py:898] (3/4) Epoch 29, batch 1450, loss[loss=0.1661, simple_loss=0.2593, pruned_loss=0.03642, over 17087.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2468, pruned_loss=0.03248, over 3577210.25 frames. ], batch size: 78, lr: 3.87e-03, grad_scale: 8.0 2023-03-09 23:37:27,608 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103207.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:37:33,044 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.757e+02 2.357e+02 2.757e+02 3.131e+02 6.312e+02, threshold=5.514e+02, percent-clipped=1.0 2023-03-09 23:37:39,504 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0 2023-03-09 23:38:09,528 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103243.0, num_to_drop=0, layers_to_drop=set() 2023-03-09 23:38:21,769 INFO [train.py:898] (3/4) Epoch 29, batch 1500, loss[loss=0.1274, simple_loss=0.2102, pruned_loss=0.02225, over 18522.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2464, pruned_loss=0.03237, over 3574374.60 frames. 
], batch size: 44, lr: 3.87e-03, grad_scale: 8.0 2023-03-09 23:38:34,476 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4643, 6.0011, 5.6075, 5.8078, 5.6212, 5.4445, 6.1052, 6.0252], device='cuda:3'), covar=tensor([0.1214, 0.0741, 0.0457, 0.0659, 0.1269, 0.0724, 0.0537, 0.0695], device='cuda:3'), in_proj_covar=tensor([0.0645, 0.0569, 0.0407, 0.0594, 0.0791, 0.0592, 0.0809, 0.0622], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3') 2023-03-09 23:38:34,688 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7384, 3.6209, 5.0474, 4.4076, 3.3688, 3.0281, 4.4816, 5.2570], device='cuda:3'), covar=tensor([0.0861, 0.1667, 0.0201, 0.0419, 0.0960, 0.1263, 0.0390, 0.0206], device='cuda:3'), in_proj_covar=tensor([0.0155, 0.0286, 0.0176, 0.0189, 0.0199, 0.0199, 0.0203, 0.0216], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3') 2023-03-09 23:39:11,324 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3456, 5.2872, 5.6138, 5.5904, 5.2415, 6.1857, 5.8501, 5.4691], device='cuda:3'), covar=tensor([0.1218, 0.0674, 0.0898, 0.0850, 0.1533, 0.0768, 0.0742, 0.1583], device='cuda:3'), in_proj_covar=tensor([0.0380, 0.0308, 0.0336, 0.0338, 0.0346, 0.0449, 0.0305, 0.0445], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3') 2023-03-09 23:39:20,022 INFO [train.py:898] (3/4) Epoch 29, batch 1550, loss[loss=0.1314, simple_loss=0.2154, pruned_loss=0.02366, over 18275.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2463, pruned_loss=0.03234, over 3575697.84 frames. ], batch size: 45, lr: 3.87e-03, grad_scale: 8.0 2023-03-09 23:39:29,926 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.405e+02 2.624e+02 2.952e+02 3.606e+02 8.116e+02, threshold=5.905e+02, percent-clipped=6.0 2023-03-09 23:39:36,276 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-09 23:40:18,837 INFO [train.py:898] (3/4) Epoch 29, batch 1600, loss[loss=0.1468, simple_loss=0.2493, pruned_loss=0.02219, over 18388.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2465, pruned_loss=0.03213, over 3579907.60 frames. ], batch size: 52, lr: 3.87e-03, grad_scale: 8.0 2023-03-09 23:40:34,689 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103366.0, num_to_drop=1, layers_to_drop={1} 2023-03-09 23:41:18,046 INFO [train.py:898] (3/4) Epoch 29, batch 1650, loss[loss=0.1549, simple_loss=0.2519, pruned_loss=0.029, over 18572.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2465, pruned_loss=0.03221, over 3575409.04 frames. ], batch size: 54, lr: 3.87e-03, grad_scale: 8.0 2023-03-09 23:41:29,742 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.750e+02 2.322e+02 2.729e+02 3.382e+02 5.618e+02, threshold=5.459e+02, percent-clipped=0.0 2023-03-09 23:41:47,137 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103427.0, num_to_drop=1, layers_to_drop={3} 2023-03-09 23:42:17,248 INFO [train.py:898] (3/4) Epoch 29, batch 1700, loss[loss=0.1695, simple_loss=0.2605, pruned_loss=0.03922, over 18352.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2473, pruned_loss=0.03229, over 3584459.74 frames. 
2023-03-09 23:42:18,674 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103454.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:42:58,939 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103488.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:43:16,073 INFO [train.py:898] (3/4) Epoch 29, batch 1750, loss[loss=0.1554, simple_loss=0.2395, pruned_loss=0.03566, over 18381.00 frames. ], tot_loss[loss=0.1561, simple_loss=0.2475, pruned_loss=0.0324, over 3577745.56 frames. ], batch size: 50, lr: 3.87e-03, grad_scale: 8.0
2023-03-09 23:43:21,058 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103507.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:43:27,002 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.672e+02 2.484e+02 2.930e+02 3.685e+02 5.975e+02, threshold=5.860e+02, percent-clipped=1.0
2023-03-09 23:43:27,587 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6279, 2.3217, 2.6043, 2.5323, 3.0394, 4.4538, 4.4622, 3.1544], device='cuda:3'), covar=tensor([0.2245, 0.2895, 0.3083, 0.2251, 0.2688, 0.0383, 0.0447, 0.1199], device='cuda:3'), in_proj_covar=tensor([0.0335, 0.0368, 0.0420, 0.0294, 0.0400, 0.0270, 0.0305, 0.0276], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002], device='cuda:3')
2023-03-09 23:44:04,176 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103543.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:44:16,099 INFO [train.py:898] (3/4) Epoch 29, batch 1800, loss[loss=0.1308, simple_loss=0.2164, pruned_loss=0.02265, over 18405.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2466, pruned_loss=0.03228, over 3577070.25 frames. ], batch size: 42, lr: 3.87e-03, grad_scale: 8.0
2023-03-09 23:44:18,470 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=103555.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:44:33,249 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9044, 3.6173, 5.1096, 3.0310, 4.4412, 2.5862, 3.1084, 1.7369], device='cuda:3'), covar=tensor([0.1222, 0.1009, 0.0175, 0.1004, 0.0501, 0.2810, 0.2693, 0.2378], device='cuda:3'), in_proj_covar=tensor([0.0232, 0.0253, 0.0232, 0.0208, 0.0265, 0.0280, 0.0336, 0.0246], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-09 23:45:00,577 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=103591.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:45:15,444 INFO [train.py:898] (3/4) Epoch 29, batch 1850, loss[loss=0.134, simple_loss=0.2154, pruned_loss=0.02626, over 18419.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2465, pruned_loss=0.03234, over 3569593.82 frames. ], batch size: 43, lr: 3.87e-03, grad_scale: 8.0
2023-03-09 23:45:25,545 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.856e+02 2.569e+02 2.972e+02 3.705e+02 6.711e+02, threshold=5.945e+02, percent-clipped=3.0
2023-03-09 23:46:14,408 INFO [train.py:898] (3/4) Epoch 29, batch 1900, loss[loss=0.136, simple_loss=0.2242, pruned_loss=0.02389, over 18444.00 frames. ], tot_loss[loss=0.1549, simple_loss=0.2458, pruned_loss=0.03201, over 3580920.13 frames. ], batch size: 43, lr: 3.87e-03, grad_scale: 8.0
2023-03-09 23:47:14,057 INFO [train.py:898] (3/4) Epoch 29, batch 1950, loss[loss=0.1711, simple_loss=0.2672, pruned_loss=0.03745, over 18349.00 frames. ], tot_loss[loss=0.1548, simple_loss=0.2455, pruned_loss=0.03201, over 3566239.43 frames. ], batch size: 55, lr: 3.86e-03, grad_scale: 8.0
2023-03-09 23:47:24,470 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.577e+02 2.406e+02 2.960e+02 3.499e+02 1.191e+03, threshold=5.920e+02, percent-clipped=3.0
2023-03-09 23:47:36,654 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103722.0, num_to_drop=1, layers_to_drop={3}
2023-03-09 23:47:57,873 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103740.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:48:13,374 INFO [train.py:898] (3/4) Epoch 29, batch 2000, loss[loss=0.1541, simple_loss=0.2536, pruned_loss=0.02724, over 18360.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2457, pruned_loss=0.0321, over 3568770.85 frames. ], batch size: 55, lr: 3.86e-03, grad_scale: 8.0
2023-03-09 23:48:14,947 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103754.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:48:27,209 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103765.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:48:40,280 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7644, 3.9884, 2.3980, 4.0071, 5.1153, 2.7114, 3.8152, 3.8448], device='cuda:3'), covar=tensor([0.0209, 0.1252, 0.1690, 0.0637, 0.0104, 0.1182, 0.0666, 0.0801], device='cuda:3'), in_proj_covar=tensor([0.0188, 0.0286, 0.0212, 0.0203, 0.0147, 0.0188, 0.0225, 0.0235], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-09 23:48:54,499 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103788.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:49:10,318 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103801.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:49:11,718 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=103802.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:49:12,642 INFO [train.py:898] (3/4) Epoch 29, batch 2050, loss[loss=0.1721, simple_loss=0.2616, pruned_loss=0.04124, over 18269.00 frames. ], tot_loss[loss=0.1557, simple_loss=0.2464, pruned_loss=0.03248, over 3572765.58 frames. ], batch size: 57, lr: 3.86e-03, grad_scale: 4.0
2023-03-09 23:49:23,959 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.828e+02 2.583e+02 3.001e+02 3.513e+02 6.272e+02, threshold=6.003e+02, percent-clipped=1.0
2023-03-09 23:49:39,243 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103826.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:49:50,976 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=103836.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:50:10,352 INFO [train.py:898] (3/4) Epoch 29, batch 2100, loss[loss=0.1612, simple_loss=0.2576, pruned_loss=0.03239, over 18480.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2464, pruned_loss=0.0326, over 3562974.53 frames. ], batch size: 53, lr: 3.86e-03, grad_scale: 4.0
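The `grad_scale` field drops from 8.0 to 4.0 at batch 2050 above and later recovers, which is the signature of dynamic loss scaling in mixed-precision training: halve the scale on an inf/nan gradient, grow it back after a long run of clean steps. A hedged sketch of that behaviour; it mirrors what torch.cuda.amp.GradScaler does, but the class below is illustrative, not the recipe's code:

```python
# Hedged sketch of fp16 dynamic loss scaling; constants are assumptions.
class SimpleGradScaler:
    def __init__(self, scale: float = 8.0, growth_interval: int = 2000):
        self.scale = scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def step(self, grads_finite: bool) -> float:
        if not grads_finite:
            self.scale *= 0.5      # overflow: back off immediately (8.0 -> 4.0)
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps % self.growth_interval == 0:
                self.scale *= 2.0  # stable run: try a larger scale again
        return self.scale
```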
2023-03-09 23:50:38,404 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103876.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:51:04,746 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1527, 5.1136, 4.7900, 5.0544, 5.0619, 4.4881, 4.9673, 4.6966], device='cuda:3'), covar=tensor([0.0415, 0.0520, 0.1289, 0.0784, 0.0624, 0.0448, 0.0487, 0.1337], device='cuda:3'), in_proj_covar=tensor([0.0523, 0.0596, 0.0736, 0.0458, 0.0487, 0.0542, 0.0573, 0.0716], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-09 23:51:09,179 INFO [train.py:898] (3/4) Epoch 29, batch 2150, loss[loss=0.1501, simple_loss=0.2473, pruned_loss=0.02647, over 18495.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2463, pruned_loss=0.03239, over 3581182.03 frames. ], batch size: 51, lr: 3.86e-03, grad_scale: 4.0
2023-03-09 23:51:21,588 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.841e+02 2.540e+02 2.937e+02 3.473e+02 7.203e+02, threshold=5.874e+02, percent-clipped=1.0
2023-03-09 23:51:50,085 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103937.0, num_to_drop=1, layers_to_drop={1}
2023-03-09 23:52:07,797 INFO [train.py:898] (3/4) Epoch 29, batch 2200, loss[loss=0.1338, simple_loss=0.2212, pruned_loss=0.02325, over 18564.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.246, pruned_loss=0.03232, over 3583056.01 frames. ], batch size: 45, lr: 3.86e-03, grad_scale: 4.0
2023-03-09 23:52:52,491 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103990.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:53:00,566 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103997.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:53:11,858 INFO [train.py:898] (3/4) Epoch 29, batch 2250, loss[loss=0.1476, simple_loss=0.2344, pruned_loss=0.03035, over 18497.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2456, pruned_loss=0.03222, over 3584957.48 frames. ], batch size: 47, lr: 3.86e-03, grad_scale: 4.0
2023-03-09 23:53:19,271 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5370, 2.8382, 2.6274, 2.8505, 3.6634, 3.5504, 3.0900, 2.8816], device='cuda:3'), covar=tensor([0.0194, 0.0316, 0.0534, 0.0407, 0.0183, 0.0165, 0.0414, 0.0437], device='cuda:3'), in_proj_covar=tensor([0.0150, 0.0150, 0.0169, 0.0168, 0.0145, 0.0132, 0.0164, 0.0168], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0003, 0.0004], device='cuda:3')
2023-03-09 23:53:23,835 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.696e+02 2.605e+02 3.021e+02 3.898e+02 6.777e+02, threshold=6.041e+02, percent-clipped=7.0
2023-03-09 23:53:35,095 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104022.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 23:53:47,731 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-09 23:53:58,971 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4544, 2.8268, 2.5426, 2.8227, 3.5980, 3.4773, 3.0715, 2.8526], device='cuda:3'), covar=tensor([0.0225, 0.0350, 0.0597, 0.0437, 0.0221, 0.0203, 0.0437, 0.0462], device='cuda:3'), in_proj_covar=tensor([0.0149, 0.0149, 0.0168, 0.0168, 0.0145, 0.0132, 0.0164, 0.0167], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0003, 0.0004], device='cuda:3')
2023-03-09 23:54:06,871 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=104049.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:54:09,201 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=104051.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:54:11,028 INFO [train.py:898] (3/4) Epoch 29, batch 2300, loss[loss=0.1495, simple_loss=0.2437, pruned_loss=0.02772, over 18383.00 frames. ], tot_loss[loss=0.1551, simple_loss=0.2459, pruned_loss=0.03219, over 3583597.36 frames. ], batch size: 50, lr: 3.86e-03, grad_scale: 4.0
2023-03-09 23:54:17,187 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=104058.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:54:31,653 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=104070.0, num_to_drop=1, layers_to_drop={0}
2023-03-09 23:54:53,728 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2231, 5.7973, 5.3755, 5.5771, 5.4236, 5.2156, 5.8279, 5.8066], device='cuda:3'), covar=tensor([0.1251, 0.0839, 0.0648, 0.0765, 0.1369, 0.0771, 0.0640, 0.0743], device='cuda:3'), in_proj_covar=tensor([0.0650, 0.0571, 0.0411, 0.0595, 0.0795, 0.0589, 0.0812, 0.0622], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-09 23:55:02,152 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104096.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:55:09,796 INFO [train.py:898] (3/4) Epoch 29, batch 2350, loss[loss=0.178, simple_loss=0.274, pruned_loss=0.04104, over 17984.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2463, pruned_loss=0.03221, over 3596994.77 frames. ], batch size: 65, lr: 3.86e-03, grad_scale: 4.0
2023-03-09 23:55:17,807 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=104110.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:55:20,809 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.751e+02 2.516e+02 3.034e+02 3.575e+02 8.102e+02, threshold=6.068e+02, percent-clipped=2.0
2023-03-09 23:55:28,658 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.0823, 5.3460, 2.8412, 5.1849, 4.9488, 5.3383, 5.1475, 2.3824], device='cuda:3'), covar=tensor([0.0227, 0.0102, 0.0900, 0.0098, 0.0101, 0.0108, 0.0122, 0.1552], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0084, 0.0098, 0.0100, 0.0091, 0.0080, 0.0087, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-09 23:55:31,309 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104121.0, num_to_drop=0, layers_to_drop=set()
2023-03-09 23:56:08,417 INFO [train.py:898] (3/4) Epoch 29, batch 2400, loss[loss=0.1374, simple_loss=0.2204, pruned_loss=0.02721, over 18365.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2463, pruned_loss=0.03219, over 3602407.73 frames. ], batch size: 42, lr: 3.86e-03, grad_scale: 8.0
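The optim.py:369 records report five quantiles of recently observed gradient norms plus a clipping threshold, and in every record above the threshold equals Clipping_scale times the median (e.g. 2.0 x 2.937e+02 = 5.874e+02). A hedged reconstruction of that bookkeeping; the function name and the window are assumptions:

```python
# Hedged sketch: quantiles of a window of gradient norms, a threshold of
# clipping_scale * median (consistent with the records above), and the
# percentage of batches whose norm exceeded it, possibly over the batches
# since the previous such record.
import torch

def grad_norm_stats(recent_norms: torch.Tensor, clipping_scale: float = 2.0):
    q = torch.quantile(recent_norms,
                       torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
    threshold = clipping_scale * q[2]                    # scale x median
    percent_clipped = 100.0 * (recent_norms > threshold).float().mean()
    return q, threshold, percent_clipped

norms = torch.abs(torch.randn(200)) * 300.0  # stand-in history of norms
quartiles, threshold, percent_clipped = grad_norm_stats(norms)
```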
2023-03-09 23:56:47,155 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9594, 5.4919, 5.4812, 5.4740, 4.8947, 5.3891, 4.8826, 5.3721], device='cuda:3'), covar=tensor([0.0230, 0.0237, 0.0175, 0.0377, 0.0380, 0.0205, 0.0928, 0.0272], device='cuda:3'), in_proj_covar=tensor([0.0233, 0.0277, 0.0279, 0.0357, 0.0286, 0.0287, 0.0317, 0.0277], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-09 23:57:07,526 INFO [train.py:898] (3/4) Epoch 29, batch 2450, loss[loss=0.1433, simple_loss=0.2301, pruned_loss=0.02821, over 18265.00 frames. ], tot_loss[loss=0.1555, simple_loss=0.2466, pruned_loss=0.03226, over 3594159.62 frames. ], batch size: 47, lr: 3.86e-03, grad_scale: 8.0
2023-03-09 23:57:18,499 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.747e+02 2.425e+02 2.919e+02 3.520e+02 8.607e+02, threshold=5.838e+02, percent-clipped=3.0
2023-03-09 23:57:41,262 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104232.0, num_to_drop=1, layers_to_drop={2}
2023-03-09 23:58:05,927 INFO [train.py:898] (3/4) Epoch 29, batch 2500, loss[loss=0.1487, simple_loss=0.2438, pruned_loss=0.02681, over 18075.00 frames. ], tot_loss[loss=0.1561, simple_loss=0.2469, pruned_loss=0.0326, over 3588503.08 frames. ], batch size: 62, lr: 3.85e-03, grad_scale: 8.0
2023-03-09 23:58:06,222 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.2539, 4.2240, 4.0423, 4.1913, 4.2140, 3.7549, 4.1702, 4.0220], device='cuda:3'), covar=tensor([0.0520, 0.0664, 0.1232, 0.0757, 0.0605, 0.0484, 0.0541, 0.1100], device='cuda:3'), in_proj_covar=tensor([0.0519, 0.0591, 0.0727, 0.0455, 0.0479, 0.0537, 0.0565, 0.0709], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-09 23:58:54,663 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0
2023-03-09 23:59:04,443 INFO [train.py:898] (3/4) Epoch 29, batch 2550, loss[loss=0.1459, simple_loss=0.2344, pruned_loss=0.02875, over 18282.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.247, pruned_loss=0.0327, over 3579514.12 frames. ], batch size: 49, lr: 3.85e-03, grad_scale: 8.0
2023-03-09 23:59:16,572 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.724e+02 2.479e+02 2.941e+02 3.574e+02 6.228e+02, threshold=5.882e+02, percent-clipped=1.0
2023-03-09 23:59:54,413 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3435, 5.2908, 5.6471, 5.6055, 5.2806, 6.1576, 5.7502, 5.3466], device='cuda:3'), covar=tensor([0.1197, 0.0618, 0.0885, 0.0845, 0.1291, 0.0653, 0.0674, 0.1594], device='cuda:3'), in_proj_covar=tensor([0.0381, 0.0310, 0.0339, 0.0342, 0.0348, 0.0453, 0.0309, 0.0446], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-09 23:59:55,465 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104346.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:00:03,153 INFO [train.py:898] (3/4) Epoch 29, batch 2600, loss[loss=0.1822, simple_loss=0.282, pruned_loss=0.04123, over 18008.00 frames. ], tot_loss[loss=0.1561, simple_loss=0.247, pruned_loss=0.03259, over 3582484.54 frames. ], batch size: 65, lr: 3.85e-03, grad_scale: 8.0
2023-03-10 00:00:03,388 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104353.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:00:54,695 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104396.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:01:03,268 INFO [train.py:898] (3/4) Epoch 29, batch 2650, loss[loss=0.1585, simple_loss=0.252, pruned_loss=0.03248, over 18326.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2466, pruned_loss=0.03254, over 3568542.31 frames. ], batch size: 55, lr: 3.85e-03, grad_scale: 8.0
2023-03-10 00:01:05,829 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104405.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:01:15,459 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.642e+02 2.433e+02 2.852e+02 3.546e+02 8.268e+02, threshold=5.705e+02, percent-clipped=2.0
2023-03-10 00:01:25,045 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104421.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:01:52,785 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=104444.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:02:03,369 INFO [train.py:898] (3/4) Epoch 29, batch 2700, loss[loss=0.1747, simple_loss=0.2642, pruned_loss=0.0426, over 18287.00 frames. ], tot_loss[loss=0.156, simple_loss=0.2468, pruned_loss=0.0326, over 3564410.54 frames. ], batch size: 57, lr: 3.85e-03, grad_scale: 8.0
2023-03-10 00:02:22,290 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=104469.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:03:01,364 INFO [train.py:898] (3/4) Epoch 29, batch 2750, loss[loss=0.1765, simple_loss=0.268, pruned_loss=0.04252, over 17716.00 frames. ], tot_loss[loss=0.1562, simple_loss=0.2469, pruned_loss=0.03269, over 3570120.31 frames. ], batch size: 70, lr: 3.85e-03, grad_scale: 8.0
2023-03-10 00:03:12,790 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.672e+02 2.565e+02 3.025e+02 3.690e+02 7.436e+02, threshold=6.050e+02, percent-clipped=2.0
2023-03-10 00:03:35,852 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104532.0, num_to_drop=1, layers_to_drop={0}
2023-03-10 00:04:00,009 INFO [train.py:898] (3/4) Epoch 29, batch 2800, loss[loss=0.1412, simple_loss=0.2347, pruned_loss=0.02382, over 18311.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2466, pruned_loss=0.03253, over 3572716.49 frames. ], batch size: 49, lr: 3.85e-03, grad_scale: 8.0
2023-03-10 00:04:31,629 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=104580.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:04:52,173 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6468, 3.5884, 2.4248, 4.4384, 3.2040, 4.2418, 2.7065, 4.0313], device='cuda:3'), covar=tensor([0.0686, 0.0813, 0.1395, 0.0560, 0.0807, 0.0318, 0.1092, 0.0418], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0234, 0.0196, 0.0300, 0.0198, 0.0273, 0.0209, 0.0210], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-10 00:04:58,832 INFO [train.py:898] (3/4) Epoch 29, batch 2850, loss[loss=0.135, simple_loss=0.2221, pruned_loss=0.0239, over 18347.00 frames. ], tot_loss[loss=0.1561, simple_loss=0.2467, pruned_loss=0.03273, over 3582557.61 frames. ], batch size: 46, lr: 3.85e-03, grad_scale: 8.0
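The scaling.py:679 "Whitening" records come from a diagnostic that compares how far a group's feature covariance is from white (isotropic) against a limit. A heavily hedged guess at the logged metric, which reaches 1.0 for a perfectly white covariance and grows with eigenvalue spread; the exact formulation in scaling.py may differ:

```python
# Hedged sketch: per-group covariance "whiteness" metric, assumed to be the
# mean squared eigenvalue over the squared mean eigenvalue.
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    # x: (num_frames, num_channels); channels split into equal groups
    num_frames, num_channels = x.shape
    x = x.reshape(num_frames, num_groups, num_channels // num_groups)
    x = x - x.mean(dim=0, keepdim=True)
    metric = 0.0
    for g in range(num_groups):
        cov = x[:, g, :].t() @ x[:, g, :] / num_frames
        eigs = torch.linalg.eigvalsh(cov)
        metric += float((eigs ** 2).mean() / (eigs.mean() ** 2))
    return metric / num_groups  # printed as "metric=... vs. limit=..."

print(whitening_metric(torch.randn(1000, 96), num_groups=8))
```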
2023-03-10 00:05:10,930 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.658e+02 2.500e+02 2.877e+02 3.433e+02 5.313e+02, threshold=5.755e+02, percent-clipped=0.0
2023-03-10 00:05:50,696 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104646.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:05:58,436 INFO [train.py:898] (3/4) Epoch 29, batch 2900, loss[loss=0.1335, simple_loss=0.2129, pruned_loss=0.02706, over 18401.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2462, pruned_loss=0.03245, over 3587576.15 frames. ], batch size: 42, lr: 3.85e-03, grad_scale: 8.0
2023-03-10 00:05:58,811 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104653.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:06:41,735 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.10 vs. limit=5.0
2023-03-10 00:06:47,693 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=104694.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:06:56,051 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=104701.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:06:58,034 INFO [train.py:898] (3/4) Epoch 29, batch 2950, loss[loss=0.1701, simple_loss=0.2674, pruned_loss=0.03634, over 18238.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2468, pruned_loss=0.03243, over 3581425.18 frames. ], batch size: 60, lr: 3.85e-03, grad_scale: 8.0
2023-03-10 00:07:00,701 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104705.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:07:09,434 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.771e+02 2.535e+02 2.972e+02 3.582e+02 1.059e+03, threshold=5.945e+02, percent-clipped=4.0
2023-03-10 00:07:57,859 INFO [train.py:898] (3/4) Epoch 29, batch 3000, loss[loss=0.1485, simple_loss=0.2462, pruned_loss=0.02538, over 18552.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2465, pruned_loss=0.03252, over 3582767.98 frames. ], batch size: 54, lr: 3.85e-03, grad_scale: 8.0
2023-03-10 00:07:57,859 INFO [train.py:923] (3/4) Computing validation loss
2023-03-10 00:08:10,054 INFO [train.py:932] (3/4) Epoch 29, validation: loss=0.1493, simple_loss=0.2471, pruned_loss=0.02574, over 944034.00 frames.
2023-03-10 00:08:10,055 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-10 00:08:10,336 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=104753.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:08:44,240 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7778, 3.6423, 5.1360, 2.9424, 4.4756, 2.7480, 3.2173, 1.9114], device='cuda:3'), covar=tensor([0.1344, 0.1051, 0.0203, 0.1068, 0.0567, 0.2751, 0.2757, 0.2389], device='cuda:3'), in_proj_covar=tensor([0.0235, 0.0256, 0.0234, 0.0210, 0.0268, 0.0284, 0.0341, 0.0249], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-10 00:09:09,598 INFO [train.py:898] (3/4) Epoch 29, batch 3050, loss[loss=0.1605, simple_loss=0.2485, pruned_loss=0.03629, over 18354.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2464, pruned_loss=0.03237, over 3591168.61 frames. ], batch size: 46, lr: 3.84e-03, grad_scale: 8.0
2023-03-10 00:09:12,740 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0
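The zipformer.py:1455 dumps above print one `attn_weights_entropy` value per attention head (eight heads, eight numbers), plausibly the entropy of each head's attention distribution averaged over query positions: high values mean diffuse attention, low values mean peaky attention. A hedged sketch of that computation; the tensor layout is an assumption:

```python
# Hedged sketch: mean per-head entropy of softmaxed attention weights.
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    # attn: (num_heads, num_queries, num_keys), rows already softmaxed
    ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (heads, queries)
    return ent.mean(dim=-1)                           # one value per head

attn = torch.softmax(torch.randn(8, 50, 50), dim=-1)
print(attn_weights_entropy(attn))  # 8 entropies, like the logged tensors
```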
2023-03-10 00:15:04,991 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6047, 3.5274, 2.2923, 4.4410, 3.1852, 4.1694, 2.7039, 3.9953], device='cuda:3'), covar=tensor([0.0685, 0.0854, 0.1520, 0.0531, 0.0814, 0.0455, 0.1142, 0.0413], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0235, 0.0197, 0.0300, 0.0198, 0.0274, 0.0209, 0.0211], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-10 00:15:15,599 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.933e+02 2.604e+02 2.932e+02 3.657e+02 1.346e+03, threshold=5.864e+02, percent-clipped=3.0
2023-03-10 00:15:42,536 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.37 vs. limit=5.0
2023-03-10 00:15:57,272 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-10 00:16:03,631 INFO [train.py:898] (3/4) Epoch 29, batch 3400, loss[loss=0.1433, simple_loss=0.227, pruned_loss=0.02984, over 18246.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2463, pruned_loss=0.03227, over 3573687.43 frames. ], batch size: 45, lr: 3.84e-03, grad_scale: 8.0
2023-03-10 00:16:04,037 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7097, 4.6154, 2.6191, 4.4788, 4.3980, 4.5658, 4.4355, 2.4665], device='cuda:3'), covar=tensor([0.0286, 0.0101, 0.0901, 0.0132, 0.0102, 0.0130, 0.0140, 0.1234], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0084, 0.0097, 0.0100, 0.0091, 0.0080, 0.0087, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-10 00:16:35,772 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0
2023-03-10 00:17:01,969 INFO [train.py:898] (3/4) Epoch 29, batch 3450, loss[loss=0.1409, simple_loss=0.2227, pruned_loss=0.02959, over 18404.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2461, pruned_loss=0.03222, over 3572104.00 frames. ], batch size: 43, lr: 3.84e-03, grad_scale: 8.0
2023-03-10 00:17:13,783 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.580e+02 2.383e+02 2.808e+02 3.318e+02 5.892e+02, threshold=5.615e+02, percent-clipped=1.0
2023-03-10 00:18:00,567 INFO [train.py:898] (3/4) Epoch 29, batch 3500, loss[loss=0.164, simple_loss=0.2532, pruned_loss=0.03738, over 18468.00 frames. ], tot_loss[loss=0.1544, simple_loss=0.2453, pruned_loss=0.03172, over 3589935.22 frames. ], batch size: 59, lr: 3.84e-03, grad_scale: 8.0
2023-03-10 00:18:04,967 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7357, 3.6349, 5.0314, 3.0306, 4.3383, 2.5849, 3.2671, 1.7064], device='cuda:3'), covar=tensor([0.1323, 0.1023, 0.0214, 0.0989, 0.0583, 0.2820, 0.2608, 0.2445], device='cuda:3'), in_proj_covar=tensor([0.0233, 0.0255, 0.0234, 0.0209, 0.0267, 0.0283, 0.0339, 0.0249], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-10 00:18:07,019 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2586, 5.2107, 5.5713, 5.6491, 5.1969, 6.0952, 5.7485, 5.3469], device='cuda:3'), covar=tensor([0.1315, 0.0670, 0.0804, 0.0635, 0.1396, 0.0712, 0.0753, 0.1825], device='cuda:3'), in_proj_covar=tensor([0.0377, 0.0309, 0.0338, 0.0341, 0.0345, 0.0452, 0.0305, 0.0445], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-10 00:18:55,797 INFO [train.py:898] (3/4) Epoch 29, batch 3550, loss[loss=0.1566, simple_loss=0.2507, pruned_loss=0.03123, over 18485.00 frames. ], tot_loss[loss=0.1539, simple_loss=0.2448, pruned_loss=0.03153, over 3596055.12 frames. ], batch size: 51, lr: 3.83e-03, grad_scale: 8.0
2023-03-10 00:19:06,454 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.798e+02 2.434e+02 2.865e+02 3.612e+02 9.086e+02, threshold=5.730e+02, percent-clipped=2.0
2023-03-10 00:19:07,841 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6889, 6.2040, 5.7305, 6.0195, 5.8409, 5.6661, 6.2678, 6.2234], device='cuda:3'), covar=tensor([0.1190, 0.0678, 0.0425, 0.0659, 0.1265, 0.0710, 0.0540, 0.0697], device='cuda:3'), in_proj_covar=tensor([0.0651, 0.0573, 0.0410, 0.0596, 0.0797, 0.0595, 0.0812, 0.0630], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-10 00:19:50,742 INFO [train.py:898] (3/4) Epoch 29, batch 3600, loss[loss=0.1933, simple_loss=0.2778, pruned_loss=0.05445, over 12098.00 frames. ], tot_loss[loss=0.1547, simple_loss=0.2456, pruned_loss=0.03192, over 3581945.72 frames. ], batch size: 130, lr: 3.83e-03, grad_scale: 8.0
2023-03-10 00:20:11,948 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-10 00:20:53,446 INFO [train.py:898] (3/4) Epoch 30, batch 0, loss[loss=0.1514, simple_loss=0.2437, pruned_loss=0.02954, over 18550.00 frames. ], tot_loss[loss=0.1514, simple_loss=0.2437, pruned_loss=0.02954, over 18550.00 frames. ], batch size: 49, lr: 3.77e-03, grad_scale: 8.0
2023-03-10 00:20:53,446 INFO [train.py:923] (3/4) Computing validation loss
2023-03-10 00:21:02,422 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.4585, 4.4361, 4.4997, 4.1365, 4.2614, 4.1955, 4.4961, 4.5460], device='cuda:3'), covar=tensor([0.0073, 0.0055, 0.0059, 0.0123, 0.0073, 0.0152, 0.0066, 0.0069], device='cuda:3'), in_proj_covar=tensor([0.0101, 0.0075, 0.0080, 0.0101, 0.0080, 0.0108, 0.0092, 0.0092], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-10 00:21:05,356 INFO [train.py:932] (3/4) Epoch 30, validation: loss=0.1503, simple_loss=0.2477, pruned_loss=0.02643, over 944034.00 frames.
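The train.py:923/932 records above mark the periodic validation pass (here at the start of epoch 30): the model is put in eval mode, frame-weighted losses are accumulated over the fixed dev set (hence the constant "over 944034.00 frames"), and a single weighted average is reported. A minimal sketch of that loop; the function and `loss_fn` signature are illustrative assumptions:

```python
# Hedged sketch of a frame-weighted validation loss computation.
import torch

def compute_validation_loss(model, valid_loader, loss_fn) -> float:
    model.eval()
    tot, frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            loss, num_frames = loss_fn(model, batch)  # per-frame loss, count
            tot += float(loss) * num_frames           # weight by frames
            frames += num_frames
    model.train()
    return tot / frames  # printed as "validation: loss=..."
```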
2023-03-10 00:21:05,357 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-10 00:21:06,961 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105388.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:21:36,266 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.589e+02 2.529e+02 2.877e+02 3.490e+02 8.878e+02, threshold=5.754e+02, percent-clipped=2.0
2023-03-10 00:21:51,237 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3643, 5.2829, 5.6421, 5.7245, 5.2753, 6.1280, 5.7872, 5.3726], device='cuda:3'), covar=tensor([0.1245, 0.0668, 0.0787, 0.0804, 0.1413, 0.0738, 0.0820, 0.1817], device='cuda:3'), in_proj_covar=tensor([0.0377, 0.0309, 0.0337, 0.0340, 0.0344, 0.0451, 0.0305, 0.0443], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-10 00:22:04,118 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9140, 3.6622, 5.0893, 3.0482, 4.3563, 2.6362, 3.3215, 1.7411], device='cuda:3'), covar=tensor([0.1242, 0.0996, 0.0173, 0.1033, 0.0542, 0.2718, 0.2432, 0.2440], device='cuda:3'), in_proj_covar=tensor([0.0235, 0.0256, 0.0235, 0.0210, 0.0268, 0.0283, 0.0340, 0.0250], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-10 00:22:04,731 INFO [train.py:898] (3/4) Epoch 30, batch 50, loss[loss=0.1744, simple_loss=0.2611, pruned_loss=0.04387, over 18390.00 frames. ], tot_loss[loss=0.1544, simple_loss=0.2459, pruned_loss=0.03141, over 815565.19 frames. ], batch size: 52, lr: 3.77e-03, grad_scale: 4.0
2023-03-10 00:22:18,568 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105449.0, num_to_drop=1, layers_to_drop={0}
2023-03-10 00:22:40,575 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0
2023-03-10 00:23:03,198 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105486.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:23:03,941 INFO [train.py:898] (3/4) Epoch 30, batch 100, loss[loss=0.1557, simple_loss=0.2528, pruned_loss=0.02935, over 17825.00 frames. ], tot_loss[loss=0.1538, simple_loss=0.2445, pruned_loss=0.03154, over 1442318.80 frames. ], batch size: 70, lr: 3.77e-03, grad_scale: 2.0
2023-03-10 00:23:21,402 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105502.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:23:36,855 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.707e+02 2.490e+02 2.952e+02 3.451e+02 6.193e+02, threshold=5.904e+02, percent-clipped=1.0
2023-03-10 00:23:45,340 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6665, 3.1573, 4.4886, 3.6812, 2.6678, 4.7173, 3.9471, 3.1506], device='cuda:3'), covar=tensor([0.0563, 0.1267, 0.0289, 0.0512, 0.1522, 0.0237, 0.0551, 0.0899], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0248, 0.0239, 0.0176, 0.0228, 0.0223, 0.0261, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 00:24:02,926 INFO [train.py:898] (3/4) Epoch 30, batch 150, loss[loss=0.2213, simple_loss=0.2947, pruned_loss=0.07389, over 12621.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2453, pruned_loss=0.03231, over 1906781.56 frames. ], batch size: 129, lr: 3.77e-03, grad_scale: 2.0
2023-03-10 00:24:04,306 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4621, 5.3500, 5.7442, 5.8727, 5.3096, 6.2701, 5.9820, 5.4460], device='cuda:3'), covar=tensor([0.1123, 0.0646, 0.0794, 0.0740, 0.1466, 0.0650, 0.0619, 0.1774], device='cuda:3'), in_proj_covar=tensor([0.0377, 0.0309, 0.0336, 0.0340, 0.0344, 0.0451, 0.0305, 0.0443], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-10 00:24:14,879 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105547.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:24:18,580 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0
2023-03-10 00:24:33,625 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105563.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:25:02,151 INFO [train.py:898] (3/4) Epoch 30, batch 200, loss[loss=0.1863, simple_loss=0.2743, pruned_loss=0.04913, over 12399.00 frames. ], tot_loss[loss=0.1547, simple_loss=0.2452, pruned_loss=0.03214, over 2269986.85 frames. ], batch size: 129, lr: 3.76e-03, grad_scale: 2.0
2023-03-10 00:25:34,557 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.818e+02 2.661e+02 3.049e+02 3.613e+02 6.015e+02, threshold=6.098e+02, percent-clipped=1.0
2023-03-10 00:25:53,864 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-10 00:26:01,254 INFO [train.py:898] (3/4) Epoch 30, batch 250, loss[loss=0.1622, simple_loss=0.2512, pruned_loss=0.03662, over 18081.00 frames. ], tot_loss[loss=0.1558, simple_loss=0.2469, pruned_loss=0.03242, over 2567579.47 frames. ], batch size: 62, lr: 3.76e-03, grad_scale: 2.0
2023-03-10 00:26:16,621 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-10 00:26:29,618 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8179, 5.3860, 2.6142, 5.2175, 5.0926, 5.3887, 5.2105, 2.5588], device='cuda:3'), covar=tensor([0.0282, 0.0066, 0.0868, 0.0080, 0.0094, 0.0077, 0.0093, 0.1082], device='cuda:3'), in_proj_covar=tensor([0.0093, 0.0083, 0.0097, 0.0099, 0.0090, 0.0080, 0.0087, 0.0098], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004, 0.0005], device='cuda:3')
2023-03-10 00:26:56,604 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.2565, 2.3568, 3.7511, 3.4229, 2.2479, 3.8949, 3.4592, 2.5600], device='cuda:3'), covar=tensor([0.0609, 0.1930, 0.0420, 0.0464, 0.2054, 0.0332, 0.0720, 0.1240], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0247, 0.0239, 0.0176, 0.0227, 0.0222, 0.0261, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 00:26:59,687 INFO [train.py:898] (3/4) Epoch 30, batch 300, loss[loss=0.1458, simple_loss=0.2391, pruned_loss=0.02627, over 18419.00 frames. ], tot_loss[loss=0.1551, simple_loss=0.246, pruned_loss=0.03205, over 2791706.66 frames. ], batch size: 48, lr: 3.76e-03, grad_scale: 2.0
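The "Maximum memory allocated so far is ...MB" line presumably reads the CUDA caching allocator's high-water mark, which torch exposes directly. A minimal sketch:

```python
# Hedged sketch: peak-memory reporting via torch.cuda.max_memory_allocated,
# which returns the maximum bytes ever allocated on the given device.
import torch

def log_peak_memory(device: torch.device) -> None:
    if torch.cuda.is_available():
        mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        print(f"Maximum memory allocated so far is {mb}MB")
```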
2023-03-10 00:27:07,419 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6032, 3.1335, 4.4781, 3.7214, 2.6903, 4.6714, 3.9280, 3.0692], device='cuda:3'), covar=tensor([0.0580, 0.1359, 0.0248, 0.0467, 0.1603, 0.0240, 0.0562, 0.0927], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0247, 0.0239, 0.0176, 0.0228, 0.0223, 0.0261, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 00:27:32,634 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.857e+02 2.459e+02 2.793e+02 3.329e+02 5.201e+02, threshold=5.586e+02, percent-clipped=0.0
2023-03-10 00:27:58,847 INFO [train.py:898] (3/4) Epoch 30, batch 350, loss[loss=0.1354, simple_loss=0.2157, pruned_loss=0.02757, over 18412.00 frames. ], tot_loss[loss=0.1539, simple_loss=0.2446, pruned_loss=0.03154, over 2961854.68 frames. ], batch size: 43, lr: 3.76e-03, grad_scale: 2.0
2023-03-10 00:28:08,223 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=105744.0, num_to_drop=1, layers_to_drop={2}
2023-03-10 00:28:57,679 INFO [train.py:898] (3/4) Epoch 30, batch 400, loss[loss=0.1654, simple_loss=0.2522, pruned_loss=0.0393, over 15966.00 frames. ], tot_loss[loss=0.154, simple_loss=0.2449, pruned_loss=0.03155, over 3102713.18 frames. ], batch size: 94, lr: 3.76e-03, grad_scale: 4.0
2023-03-10 00:29:03,616 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105792.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:29:12,529 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105799.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:29:30,415 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.921e+02 2.502e+02 2.978e+02 3.750e+02 7.883e+02, threshold=5.955e+02, percent-clipped=2.0
2023-03-10 00:29:55,602 INFO [train.py:898] (3/4) Epoch 30, batch 450, loss[loss=0.1697, simple_loss=0.2637, pruned_loss=0.03779, over 18050.00 frames. ], tot_loss[loss=0.1544, simple_loss=0.2452, pruned_loss=0.03178, over 3211737.38 frames. ], batch size: 62, lr: 3.76e-03, grad_scale: 4.0
2023-03-10 00:30:01,546 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=105842.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:30:15,335 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105853.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:30:20,942 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=105858.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:30:23,547 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105860.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:30:54,157 INFO [train.py:898] (3/4) Epoch 30, batch 500, loss[loss=0.1596, simple_loss=0.2525, pruned_loss=0.03333, over 18345.00 frames. ], tot_loss[loss=0.1543, simple_loss=0.2451, pruned_loss=0.03177, over 3301480.30 frames. ], batch size: 56, lr: 3.76e-03, grad_scale: 4.0
2023-03-10 00:31:21,520 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3567, 5.3353, 5.0194, 5.3012, 5.2793, 4.7172, 5.2045, 4.9302], device='cuda:3'), covar=tensor([0.0433, 0.0440, 0.1298, 0.0693, 0.0560, 0.0449, 0.0431, 0.1178], device='cuda:3'), in_proj_covar=tensor([0.0519, 0.0588, 0.0725, 0.0456, 0.0491, 0.0535, 0.0566, 0.0707], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-10 00:31:25,881 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.730e+02 2.387e+02 2.835e+02 3.555e+02 6.672e+02, threshold=5.669e+02, percent-clipped=2.0
2023-03-10 00:31:51,022 INFO [train.py:898] (3/4) Epoch 30, batch 550, loss[loss=0.1377, simple_loss=0.2237, pruned_loss=0.02581, over 18180.00 frames. ], tot_loss[loss=0.1547, simple_loss=0.2455, pruned_loss=0.03197, over 3360399.73 frames. ], batch size: 44, lr: 3.76e-03, grad_scale: 4.0
2023-03-10 00:32:20,209 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9837, 5.1153, 5.1631, 4.8320, 4.8146, 4.8226, 5.2228, 5.2447], device='cuda:3'), covar=tensor([0.0076, 0.0068, 0.0067, 0.0128, 0.0070, 0.0201, 0.0082, 0.0092], device='cuda:3'), in_proj_covar=tensor([0.0103, 0.0077, 0.0082, 0.0103, 0.0082, 0.0112, 0.0094, 0.0094], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-10 00:32:47,910 INFO [train.py:898] (3/4) Epoch 30, batch 600, loss[loss=0.1301, simple_loss=0.2149, pruned_loss=0.02265, over 18306.00 frames. ], tot_loss[loss=0.1548, simple_loss=0.2457, pruned_loss=0.03197, over 3411628.32 frames. ], batch size: 49, lr: 3.76e-03, grad_scale: 4.0
2023-03-10 00:33:26,794 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.547e+02 2.468e+02 2.895e+02 3.494e+02 7.378e+02, threshold=5.790e+02, percent-clipped=2.0
2023-03-10 00:33:51,620 INFO [train.py:898] (3/4) Epoch 30, batch 650, loss[loss=0.1715, simple_loss=0.2693, pruned_loss=0.03684, over 18366.00 frames. ], tot_loss[loss=0.1552, simple_loss=0.2459, pruned_loss=0.03219, over 3431233.94 frames. ], batch size: 56, lr: 3.76e-03, grad_scale: 4.0
2023-03-10 00:34:00,495 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106044.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:34:29,832 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-10 00:34:38,602 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106077.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:34:49,505 INFO [train.py:898] (3/4) Epoch 30, batch 700, loss[loss=0.1508, simple_loss=0.2498, pruned_loss=0.02591, over 18209.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2466, pruned_loss=0.03232, over 3460629.97 frames. ], batch size: 60, lr: 3.76e-03, grad_scale: 4.0
2023-03-10 00:34:55,340 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=106092.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:35:14,069 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0
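The `lr` field decays smoothly with the batch index and steps down at the epoch boundary (3.83e-03 late in epoch 29, 3.77e-03 at the start of epoch 30). A hedged sketch of an Eden-style schedule with exactly that shape; the constants lr_batches=5000 and lr_epochs=3.5 are this recipe's documented defaults, assumed here, and the formula reproduces the logged values to within a percent or two:

```python
# Hedged sketch of an Eden-style learning-rate schedule.
def eden_lr(base_lr: float, batch: int, epoch: float,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    factor_b = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    factor_e = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * factor_b * factor_e

print(eden_lr(0.05, batch=106000, epoch=30))  # ~3.7e-03, as logged nearby
```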
2023-03-10 00:35:22,427 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.614e+02 2.517e+02 2.950e+02 3.495e+02 6.202e+02, threshold=5.900e+02, percent-clipped=1.0
2023-03-10 00:35:48,487 INFO [train.py:898] (3/4) Epoch 30, batch 750, loss[loss=0.1637, simple_loss=0.2586, pruned_loss=0.03439, over 18473.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2463, pruned_loss=0.03218, over 3488531.20 frames. ], batch size: 51, lr: 3.75e-03, grad_scale: 4.0
2023-03-10 00:35:50,034 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106138.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:35:54,442 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106142.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:36:02,106 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106148.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:36:09,971 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106155.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:36:13,464 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106158.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:36:16,813 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7365, 3.0280, 4.4593, 3.7362, 2.7553, 4.6614, 3.9122, 2.9964], device='cuda:3'), covar=tensor([0.0521, 0.1413, 0.0310, 0.0490, 0.1539, 0.0226, 0.0656, 0.0915], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0246, 0.0239, 0.0174, 0.0225, 0.0220, 0.0260, 0.0200], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 00:36:46,890 INFO [train.py:898] (3/4) Epoch 30, batch 800, loss[loss=0.1437, simple_loss=0.2353, pruned_loss=0.02604, over 18403.00 frames. ], tot_loss[loss=0.1549, simple_loss=0.2458, pruned_loss=0.03198, over 3508695.90 frames. ], batch size: 48, lr: 3.75e-03, grad_scale: 8.0
2023-03-10 00:36:50,263 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=106190.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:36:50,460 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5982, 4.3674, 4.3180, 3.2758, 3.5650, 3.3564, 2.5590, 2.3597], device='cuda:3'), covar=tensor([0.0252, 0.0153, 0.0105, 0.0340, 0.0353, 0.0273, 0.0696, 0.0873], device='cuda:3'), in_proj_covar=tensor([0.0078, 0.0065, 0.0072, 0.0074, 0.0094, 0.0072, 0.0081, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-10 00:37:09,744 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=106206.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:37:19,673 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.545e+02 2.490e+02 2.877e+02 3.308e+02 6.408e+02, threshold=5.753e+02, percent-clipped=1.0
2023-03-10 00:37:32,573 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7285, 4.8187, 4.1145, 4.6835, 4.7989, 4.2849, 4.6000, 4.3086], device='cuda:3'), covar=tensor([0.0954, 0.0865, 0.2563, 0.1283, 0.0853, 0.0632, 0.0910, 0.1549], device='cuda:3'), in_proj_covar=tensor([0.0525, 0.0598, 0.0737, 0.0464, 0.0497, 0.0544, 0.0575, 0.0718], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0005, 0.0005, 0.0006], device='cuda:3')
2023-03-10 00:37:45,836 INFO [train.py:898] (3/4) Epoch 30, batch 850, loss[loss=0.1343, simple_loss=0.2167, pruned_loss=0.02599, over 18099.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2461, pruned_loss=0.03226, over 3529648.91 frames. ], batch size: 40, lr: 3.75e-03, grad_scale: 8.0
2023-03-10 00:38:21,709 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.3637, 2.6365, 4.0424, 3.5117, 2.5455, 4.2049, 3.7353, 2.7349], device='cuda:3'), covar=tensor([0.0598, 0.1610, 0.0350, 0.0458, 0.1596, 0.0255, 0.0590, 0.1018], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0245, 0.0238, 0.0173, 0.0225, 0.0220, 0.0259, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 00:38:21,724 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106268.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:38:42,939 INFO [train.py:898] (3/4) Epoch 30, batch 900, loss[loss=0.1415, simple_loss=0.2276, pruned_loss=0.02773, over 18551.00 frames. ], tot_loss[loss=0.1559, simple_loss=0.2469, pruned_loss=0.03251, over 3530697.37 frames. ], batch size: 49, lr: 3.75e-03, grad_scale: 8.0
2023-03-10 00:39:01,617 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7684, 4.5603, 4.4748, 3.4745, 3.7943, 3.5520, 2.9089, 2.4534], device='cuda:3'), covar=tensor([0.0243, 0.0138, 0.0110, 0.0325, 0.0311, 0.0221, 0.0631, 0.0935], device='cuda:3'), in_proj_covar=tensor([0.0077, 0.0065, 0.0072, 0.0073, 0.0094, 0.0072, 0.0081, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-10 00:39:15,254 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.738e+02 2.494e+02 3.069e+02 3.732e+02 1.058e+03, threshold=6.138e+02, percent-clipped=4.0
2023-03-10 00:39:27,455 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
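The zipformer.py:625 records track stochastic layer skipping: most batches drop nothing (`layers_to_drop=set()`), and occasionally a single layer such as {1} or {3} is skipped, with per-layer warmup windows (`warmup_begin`/`warmup_end`) suggesting the rate is scheduled per layer. A hedged sketch of the selection step only, with a flat probability for brevity; the real schedule is not reconstructed here:

```python
# Hedged sketch: independently skip each encoder layer with a small
# probability. The 2% rate is an illustrative assumption.
import random

def choose_layers_to_drop(num_layers: int, drop_prob: float = 0.02) -> set:
    return {i for i in range(num_layers) if random.random() < drop_prob}

print(choose_layers_to_drop(5))  # usually set(), occasionally e.g. {3}
```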
2023-03-10 00:39:31,438 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106329.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:39:39,804 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106336.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:39:40,561 INFO [train.py:898] (3/4) Epoch 30, batch 950, loss[loss=0.16, simple_loss=0.2552, pruned_loss=0.03244, over 18289.00 frames. ], tot_loss[loss=0.1556, simple_loss=0.2468, pruned_loss=0.03221, over 3546977.65 frames. ], batch size: 54, lr: 3.75e-03, grad_scale: 8.0
2023-03-10 00:39:40,942 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6241, 3.1515, 4.4802, 3.6765, 2.6845, 4.6449, 4.0239, 2.8164], device='cuda:3'), covar=tensor([0.0561, 0.1312, 0.0291, 0.0507, 0.1635, 0.0239, 0.0515, 0.1027], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0245, 0.0238, 0.0173, 0.0225, 0.0220, 0.0259, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 00:40:39,176 INFO [train.py:898] (3/4) Epoch 30, batch 1000, loss[loss=0.171, simple_loss=0.2568, pruned_loss=0.04258, over 18285.00 frames. ], tot_loss[loss=0.1557, simple_loss=0.2469, pruned_loss=0.03222, over 3559065.87 frames. ], batch size: 47, lr: 3.75e-03, grad_scale: 8.0
2023-03-10 00:40:50,705 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106397.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:41:00,969 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.1598, 4.5874, 4.3492, 4.4249, 4.1890, 4.8495, 4.5421, 4.1694], device='cuda:3'), covar=tensor([0.1385, 0.1101, 0.0950, 0.0864, 0.1515, 0.1020, 0.0800, 0.1724], device='cuda:3'), in_proj_covar=tensor([0.0378, 0.0308, 0.0337, 0.0339, 0.0345, 0.0448, 0.0305, 0.0446], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-10 00:41:06,636 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0
2023-03-10 00:41:11,529 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.702e+02 2.421e+02 2.824e+02 3.495e+02 5.852e+02, threshold=5.647e+02, percent-clipped=0.0
2023-03-10 00:41:11,798 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2755, 5.2907, 5.5702, 5.6331, 5.2288, 6.1196, 5.7327, 5.3391], device='cuda:3'), covar=tensor([0.1254, 0.0635, 0.0810, 0.0743, 0.1618, 0.0752, 0.0701, 0.1689], device='cuda:3'), in_proj_covar=tensor([0.0378, 0.0308, 0.0337, 0.0339, 0.0345, 0.0449, 0.0306, 0.0446], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-10 00:41:20,519 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8853, 4.9118, 4.9767, 4.7046, 4.7327, 4.7713, 5.0755, 5.0663], device='cuda:3'), covar=tensor([0.0082, 0.0077, 0.0067, 0.0122, 0.0077, 0.0163, 0.0071, 0.0095], device='cuda:3'), in_proj_covar=tensor([0.0104, 0.0077, 0.0083, 0.0104, 0.0083, 0.0113, 0.0095, 0.0095], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-10 00:41:32,640 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106433.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:41:37,595 INFO [train.py:898] (3/4) Epoch 30, batch 1050, loss[loss=0.1536, simple_loss=0.251, pruned_loss=0.02809, over 18549.00 frames. ], tot_loss[loss=0.1552, simple_loss=0.2468, pruned_loss=0.03186, over 3563476.39 frames. ], batch size: 49, lr: 3.75e-03, grad_scale: 8.0
2023-03-10 00:41:51,048 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106448.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:41:57,879 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106454.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:41:58,929 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106455.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:42:23,878 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7782, 3.1791, 4.5653, 3.8816, 2.7153, 4.7983, 4.1062, 3.2042], device='cuda:3'), covar=tensor([0.0529, 0.1359, 0.0294, 0.0477, 0.1584, 0.0216, 0.0575, 0.0882], device='cuda:3'), in_proj_covar=tensor([0.0223, 0.0245, 0.0237, 0.0173, 0.0225, 0.0220, 0.0258, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 00:42:33,719 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8846, 3.8014, 5.4303, 3.4902, 4.7136, 2.7885, 3.3878, 2.0088], device='cuda:3'), covar=tensor([0.1249, 0.0913, 0.0136, 0.0829, 0.0472, 0.2437, 0.2531, 0.2213], device='cuda:3'), in_proj_covar=tensor([0.0236, 0.0257, 0.0237, 0.0211, 0.0269, 0.0285, 0.0342, 0.0251], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-10 00:42:35,945 INFO [train.py:898] (3/4) Epoch 30, batch 1100, loss[loss=0.1542, simple_loss=0.2541, pruned_loss=0.02712, over 18579.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2463, pruned_loss=0.03181, over 3558221.52 frames. ], batch size: 54, lr: 3.75e-03, grad_scale: 8.0
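The three numbers in every loss[...] record fit the pruned-transducer objective: a cheap "simple" joiner loss plus the pruned full loss. A hedged sketch of how the printed scalar could combine them; a weight of 0.5 on simple_loss reproduces the logged totals, e.g. the batch 1100 record just above gives 0.5 x 0.2541 + 0.02712 = 0.1542:

```python
# Hedged sketch: combining the two transducer losses into the printed loss.
# The 0.5 weight is inferred from the arithmetic of the records themselves.
def combined_loss(simple_loss: float, pruned_loss: float,
                  simple_loss_scale: float = 0.5) -> float:
    return simple_loss_scale * simple_loss + pruned_loss

print(combined_loss(0.2541, 0.02712))  # -> 0.15417, printed as 0.1542
```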
], batch size: 54, lr: 3.75e-03, grad_scale: 8.0 2023-03-10 00:42:36,448 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9514, 3.8277, 5.4557, 3.4169, 4.7940, 2.7557, 3.3343, 2.0929], device='cuda:3'), covar=tensor([0.1232, 0.1034, 0.0148, 0.0826, 0.0423, 0.2629, 0.2610, 0.2276], device='cuda:3'), in_proj_covar=tensor([0.0236, 0.0257, 0.0237, 0.0211, 0.0269, 0.0285, 0.0342, 0.0251], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-10 00:42:46,807 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=106496.0, num_to_drop=0, layers_to_drop=set() 2023-03-10 00:42:54,828 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=106503.0, num_to_drop=0, layers_to_drop=set() 2023-03-10 00:43:08,576 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.832e+02 2.448e+02 2.852e+02 3.322e+02 5.869e+02, threshold=5.705e+02, percent-clipped=1.0 2023-03-10 00:43:09,026 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106515.0, num_to_drop=0, layers_to_drop=set() 2023-03-10 00:43:33,607 INFO [train.py:898] (3/4) Epoch 30, batch 1150, loss[loss=0.1795, simple_loss=0.2695, pruned_loss=0.04476, over 18141.00 frames. ], tot_loss[loss=0.1554, simple_loss=0.2468, pruned_loss=0.03195, over 3570345.26 frames. ], batch size: 62, lr: 3.75e-03, grad_scale: 8.0 2023-03-10 00:43:51,975 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7311, 3.7499, 5.4531, 3.5129, 4.7568, 2.6892, 3.2054, 1.9696], device='cuda:3'), covar=tensor([0.1281, 0.1001, 0.0118, 0.0734, 0.0425, 0.2602, 0.2622, 0.2273], device='cuda:3'), in_proj_covar=tensor([0.0236, 0.0257, 0.0236, 0.0211, 0.0269, 0.0284, 0.0342, 0.0251], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3') 2023-03-10 00:44:32,025 INFO [train.py:898] (3/4) Epoch 30, batch 1200, loss[loss=0.1382, simple_loss=0.2235, pruned_loss=0.02645, over 18504.00 frames. ], tot_loss[loss=0.1546, simple_loss=0.2457, pruned_loss=0.03172, over 3572530.48 frames. ], batch size: 47, lr: 3.75e-03, grad_scale: 8.0 2023-03-10 00:45:04,656 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.878e+02 2.700e+02 3.058e+02 3.812e+02 6.890e+02, threshold=6.116e+02, percent-clipped=3.0 2023-03-10 00:45:15,360 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106624.0, num_to_drop=0, layers_to_drop=set() 2023-03-10 00:45:30,331 INFO [train.py:898] (3/4) Epoch 30, batch 1250, loss[loss=0.1342, simple_loss=0.2216, pruned_loss=0.02339, over 18440.00 frames. ], tot_loss[loss=0.1545, simple_loss=0.2456, pruned_loss=0.03174, over 3578129.57 frames. ], batch size: 43, lr: 3.75e-03, grad_scale: 8.0 2023-03-10 00:45:55,856 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9922, 4.7649, 4.8154, 3.7443, 3.9951, 3.7120, 2.9663, 2.8760], device='cuda:3'), covar=tensor([0.0231, 0.0151, 0.0076, 0.0289, 0.0290, 0.0211, 0.0633, 0.0752], device='cuda:3'), in_proj_covar=tensor([0.0077, 0.0064, 0.0071, 0.0073, 0.0092, 0.0071, 0.0079, 0.0087], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006], device='cuda:3') 2023-03-10 00:46:00,805 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. 
2023-03-10 00:46:00,805 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-10 00:46:28,924 INFO [train.py:898] (3/4) Epoch 30, batch 1300, loss[loss=0.1544, simple_loss=0.2298, pruned_loss=0.03947, over 17608.00 frames. ], tot_loss[loss=0.1545, simple_loss=0.2454, pruned_loss=0.0318, over 3581822.51 frames. ], batch size: 39, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:46:34,902 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106692.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:46:44,330 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5441, 2.7203, 2.5283, 2.8260, 3.5452, 3.4846, 3.0726, 2.8001], device='cuda:3'), covar=tensor([0.0218, 0.0338, 0.0627, 0.0453, 0.0241, 0.0212, 0.0419, 0.0427], device='cuda:3'), in_proj_covar=tensor([0.0150, 0.0152, 0.0170, 0.0170, 0.0146, 0.0133, 0.0165, 0.0168], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0003, 0.0004], device='cuda:3')
2023-03-10 00:47:01,416 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.810e+02 2.359e+02 2.806e+02 3.557e+02 5.057e+02, threshold=5.611e+02, percent-clipped=0.0
2023-03-10 00:47:22,134 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106733.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:47:26,897 INFO [train.py:898] (3/4) Epoch 30, batch 1350, loss[loss=0.1367, simple_loss=0.222, pruned_loss=0.02575, over 17655.00 frames. ], tot_loss[loss=0.1545, simple_loss=0.2459, pruned_loss=0.03153, over 3593468.00 frames. ], batch size: 39, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:47:27,419 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6954, 2.4780, 2.6240, 2.7550, 3.1799, 4.8158, 4.9199, 3.4329], device='cuda:3'), covar=tensor([0.2225, 0.2820, 0.3335, 0.2056, 0.2723, 0.0335, 0.0304, 0.1058], device='cuda:3'), in_proj_covar=tensor([0.0339, 0.0367, 0.0422, 0.0294, 0.0402, 0.0272, 0.0305, 0.0278], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002], device='cuda:3')
2023-03-10 00:48:18,859 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=106781.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:48:25,445 INFO [train.py:898] (3/4) Epoch 30, batch 1400, loss[loss=0.1312, simple_loss=0.2168, pruned_loss=0.02284, over 18257.00 frames. ], tot_loss[loss=0.1546, simple_loss=0.2459, pruned_loss=0.03164, over 3592263.22 frames. ], batch size: 47, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:48:45,057 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-10 00:48:52,881 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106810.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:48:58,343 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.725e+02 2.645e+02 3.103e+02 3.852e+02 9.390e+02, threshold=6.206e+02, percent-clipped=1.0
2023-03-10 00:49:00,945 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7951, 4.7567, 4.4933, 4.6888, 4.7234, 4.2058, 4.6608, 4.4673], device='cuda:3'), covar=tensor([0.0435, 0.0501, 0.1238, 0.0729, 0.0593, 0.0439, 0.0455, 0.1069], device='cuda:3'), in_proj_covar=tensor([0.0529, 0.0601, 0.0742, 0.0465, 0.0500, 0.0546, 0.0580, 0.0724], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0005, 0.0005, 0.0006], device='cuda:3')
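
The [scaling.py:679] Whitening lines compare a metric against a per-module limit (2.0 or 5.0 in this log). One plausible formulation, sketched here under the assumption that the metric measures how far the feature covariance is from a multiple of the identity; the exact formula behind these lines may differ:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # x: (num_frames, num_channels). Split channels into groups, form
    # each group's covariance C, and return the mean over groups of
    # (C**2).sum() * c / trace(C)**2, with c channels per group.
    # By Cauchy-Schwarz this is >= 1.0, with equality iff C is a
    # multiple of the identity ("perfectly white" features).
    n, num_channels = x.shape
    c = num_channels // num_groups
    xg = x.reshape(n, num_groups, c).permute(1, 0, 2)   # (groups, n, c)
    cov = torch.matmul(xg.transpose(1, 2), xg) / n      # (groups, c, c)
    trace = cov.diagonal(dim1=-2, dim2=-1).sum(-1)      # (groups,)
    metric = (cov ** 2).sum(dim=(-2, -1)) * c / (trace ** 2 + 1e-20)
    return metric.mean()

With this definition the metric is 1.0 for an isotropic covariance and grows as the spectrum becomes lopsided, which is consistent with healthy values like 1.23 sitting comfortably under limit=2.0 while the num_groups=1 modules run against a looser limit of 5.0.
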
2023-03-10 00:49:23,865 INFO [train.py:898] (3/4) Epoch 30, batch 1450, loss[loss=0.1454, simple_loss=0.225, pruned_loss=0.03292, over 18470.00 frames. ], tot_loss[loss=0.1536, simple_loss=0.2447, pruned_loss=0.03126, over 3598929.87 frames. ], batch size: 44, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:49:54,992 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7561, 4.0891, 2.4878, 3.9344, 5.1830, 2.5568, 3.7678, 3.9945], device='cuda:3'), covar=tensor([0.0266, 0.1250, 0.1597, 0.0659, 0.0097, 0.1241, 0.0715, 0.0710], device='cuda:3'), in_proj_covar=tensor([0.0188, 0.0286, 0.0212, 0.0204, 0.0148, 0.0187, 0.0225, 0.0233], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-10 00:50:22,117 INFO [train.py:898] (3/4) Epoch 30, batch 1500, loss[loss=0.1417, simple_loss=0.2239, pruned_loss=0.02975, over 18440.00 frames. ], tot_loss[loss=0.1537, simple_loss=0.2449, pruned_loss=0.03126, over 3594992.51 frames. ], batch size: 43, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:50:55,410 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.852e+02 2.528e+02 2.938e+02 3.558e+02 8.053e+02, threshold=5.876e+02, percent-clipped=1.0
2023-03-10 00:51:05,841 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106924.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:51:20,623 INFO [train.py:898] (3/4) Epoch 30, batch 1550, loss[loss=0.1575, simple_loss=0.249, pruned_loss=0.033, over 18391.00 frames. ], tot_loss[loss=0.1537, simple_loss=0.245, pruned_loss=0.03117, over 3595351.49 frames. ], batch size: 56, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:51:30,534 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8426, 3.4459, 4.4787, 4.0112, 3.1114, 2.8417, 4.0198, 4.6754], device='cuda:3'), covar=tensor([0.0747, 0.1393, 0.0312, 0.0501, 0.1041, 0.1282, 0.0485, 0.0360], device='cuda:3'), in_proj_covar=tensor([0.0158, 0.0289, 0.0181, 0.0191, 0.0202, 0.0199, 0.0207, 0.0221], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-10 00:52:01,403 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=106972.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:52:18,620 INFO [train.py:898] (3/4) Epoch 30, batch 1600, loss[loss=0.1372, simple_loss=0.2227, pruned_loss=0.02587, over 18411.00 frames. ], tot_loss[loss=0.1538, simple_loss=0.2451, pruned_loss=0.03125, over 3599470.14 frames. ], batch size: 48, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:52:20,150 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8321, 3.6582, 2.2894, 4.5995, 3.2968, 4.0055, 2.5507, 3.9946], device='cuda:3'), covar=tensor([0.0537, 0.0791, 0.1512, 0.0489, 0.0781, 0.0446, 0.1380, 0.0524], device='cuda:3'), in_proj_covar=tensor([0.0224, 0.0233, 0.0197, 0.0300, 0.0199, 0.0272, 0.0208, 0.0210], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-10 00:52:24,697 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106992.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:52:50,789 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.531e+02 2.519e+02 3.033e+02 3.655e+02 1.046e+03, threshold=6.066e+02, percent-clipped=4.0
2023-03-10 00:53:16,402 INFO [train.py:898] (3/4) Epoch 30, batch 1650, loss[loss=0.1427, simple_loss=0.24, pruned_loss=0.02267, over 16034.00 frames. ], tot_loss[loss=0.1536, simple_loss=0.2446, pruned_loss=0.03126, over 3599782.71 frames. ], batch size: 94, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:53:20,497 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=107040.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:53:53,732 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.33 vs. limit=5.0
2023-03-10 00:54:02,497 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5656, 4.1291, 4.0749, 3.2914, 3.4848, 3.2324, 2.4858, 2.3999], device='cuda:3'), covar=tensor([0.0253, 0.0170, 0.0114, 0.0342, 0.0345, 0.0255, 0.0755, 0.0870], device='cuda:3'), in_proj_covar=tensor([0.0077, 0.0064, 0.0071, 0.0073, 0.0093, 0.0071, 0.0080, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-10 00:54:14,380 INFO [train.py:898] (3/4) Epoch 30, batch 1700, loss[loss=0.1554, simple_loss=0.2555, pruned_loss=0.02762, over 18307.00 frames. ], tot_loss[loss=0.1545, simple_loss=0.2459, pruned_loss=0.03152, over 3601791.15 frames. ], batch size: 54, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:54:19,908 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8194, 3.4050, 4.4554, 2.8765, 3.9463, 2.5757, 2.8504, 1.9665], device='cuda:3'), covar=tensor([0.1280, 0.1075, 0.0257, 0.1024, 0.0580, 0.2687, 0.2730, 0.2351], device='cuda:3'), in_proj_covar=tensor([0.0236, 0.0259, 0.0238, 0.0212, 0.0271, 0.0286, 0.0344, 0.0253], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-10 00:54:41,468 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=107110.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:54:46,860 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.859e+02 2.555e+02 2.944e+02 3.831e+02 7.585e+02, threshold=5.887e+02, percent-clipped=1.0
2023-03-10 00:55:12,556 INFO [train.py:898] (3/4) Epoch 30, batch 1750, loss[loss=0.1647, simple_loss=0.2559, pruned_loss=0.03671, over 18478.00 frames. ], tot_loss[loss=0.1549, simple_loss=0.2464, pruned_loss=0.03171, over 3578687.50 frames. ], batch size: 59, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:55:17,026 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107140.0, num_to_drop=1, layers_to_drop={0}
2023-03-10 00:55:30,589 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107152.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:55:37,719 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=107158.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 00:56:12,003 INFO [train.py:898] (3/4) Epoch 30, batch 1800, loss[loss=0.1461, simple_loss=0.2324, pruned_loss=0.02991, over 18374.00 frames. ], tot_loss[loss=0.1546, simple_loss=0.2462, pruned_loss=0.03147, over 3591685.55 frames. ], batch size: 50, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:56:29,000 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107201.0, num_to_drop=1, layers_to_drop={2}
2023-03-10 00:56:43,077 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107213.0, num_to_drop=1, layers_to_drop={1}
2023-03-10 00:56:44,846 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.514e+02 2.402e+02 2.925e+02 3.436e+02 5.429e+02, threshold=5.850e+02, percent-clipped=0.0
2023-03-10 00:57:09,815 INFO [train.py:898] (3/4) Epoch 30, batch 1850, loss[loss=0.1582, simple_loss=0.2535, pruned_loss=0.03139, over 18475.00 frames. ], tot_loss[loss=0.1549, simple_loss=0.2462, pruned_loss=0.03184, over 3594230.78 frames. ], batch size: 53, lr: 3.74e-03, grad_scale: 8.0
2023-03-10 00:57:21,899 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.6351, 5.5852, 5.2310, 5.5725, 5.5103, 4.9528, 5.4287, 5.1762], device='cuda:3'), covar=tensor([0.0395, 0.0426, 0.1261, 0.0733, 0.0608, 0.0381, 0.0416, 0.1139], device='cuda:3'), in_proj_covar=tensor([0.0525, 0.0595, 0.0738, 0.0464, 0.0498, 0.0541, 0.0573, 0.0718], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0004, 0.0005, 0.0004, 0.0005, 0.0006], device='cuda:3')
2023-03-10 00:58:08,628 INFO [train.py:898] (3/4) Epoch 30, batch 1900, loss[loss=0.1745, simple_loss=0.2672, pruned_loss=0.04089, over 17941.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2462, pruned_loss=0.03187, over 3582043.98 frames. ], batch size: 65, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 00:58:37,315 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-10 00:58:41,097 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.738e+02 2.430e+02 2.951e+02 3.799e+02 8.497e+02, threshold=5.903e+02, percent-clipped=5.0
2023-03-10 00:58:52,886 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.5436, 2.3064, 2.5265, 2.5183, 3.1904, 4.6861, 4.6630, 3.2124], device='cuda:3'), covar=tensor([0.2168, 0.2788, 0.3266, 0.2245, 0.2518, 0.0317, 0.0361, 0.1108], device='cuda:3'), in_proj_covar=tensor([0.0338, 0.0367, 0.0422, 0.0294, 0.0400, 0.0271, 0.0305, 0.0277], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002], device='cuda:3')
2023-03-10 00:59:07,027 INFO [train.py:898] (3/4) Epoch 30, batch 1950, loss[loss=0.1637, simple_loss=0.2571, pruned_loss=0.03515, over 18365.00 frames. ], tot_loss[loss=0.155, simple_loss=0.2461, pruned_loss=0.03193, over 3588240.48 frames. ], batch size: 56, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 00:59:09,652 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5224, 2.9823, 4.3589, 3.6483, 2.5460, 4.5966, 3.8908, 2.9103], device='cuda:3'), covar=tensor([0.0619, 0.1391, 0.0317, 0.0532, 0.1676, 0.0237, 0.0620, 0.0970], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0245, 0.0239, 0.0175, 0.0226, 0.0220, 0.0259, 0.0199], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
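
The [zipformer.py:625] lines trace stochastic layer skipping. Each encoder stack has its own warmup interval in batches (warmup_begin/warmup_end), and this deep into training (batch_count above 107000) num_to_drop is almost always 0, with an occasional single layer skipped, e.g. layers_to_drop={2} at batch_count=107201.0. A sketch of one plausible schedule; the probabilities and the post-warmup behaviour here are assumptions, not the logged code's actual rule:

import random

def layers_to_drop(batch_count: float, num_layers: int,
                   warmup_begin: float, warmup_end: float,
                   max_drop_prob: float = 0.075) -> set:
    # Drop layers aggressively before warmup_begin, ramp the drop
    # probability down linearly across [warmup_begin, warmup_end], and
    # keep a small residual probability afterwards. Returns the set of
    # layer indices to skip for this batch. (Illustrative only.)
    if batch_count < warmup_begin:
        p = max_drop_prob
    elif batch_count < warmup_end:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = max_drop_prob * (1.0 - frac)
    else:
        p = 0.005  # rare drops remain, matching occasional num_to_drop=1
    return {i for i in range(num_layers) if random.random() < p}

# e.g. the stack logged with warmup_begin=3333.3, warmup_end=4000.0;
# num_layers=4 is an assumed stack depth for illustration.
print(layers_to_drop(batch_count=107201.0, num_layers=4,
                     warmup_begin=3333.3, warmup_end=4000.0))
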
2023-03-10 01:00:04,530 INFO [train.py:898] (3/4) Epoch 30, batch 2000, loss[loss=0.1469, simple_loss=0.2409, pruned_loss=0.02648, over 18411.00 frames. ], tot_loss[loss=0.1548, simple_loss=0.2461, pruned_loss=0.03172, over 3601945.38 frames. ], batch size: 48, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 01:00:38,312 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.734e+02 2.377e+02 2.711e+02 3.457e+02 7.129e+02, threshold=5.422e+02, percent-clipped=2.0
2023-03-10 01:01:03,317 INFO [train.py:898] (3/4) Epoch 30, batch 2050, loss[loss=0.1507, simple_loss=0.2491, pruned_loss=0.0262, over 18587.00 frames. ], tot_loss[loss=0.1551, simple_loss=0.2463, pruned_loss=0.03194, over 3596425.57 frames. ], batch size: 54, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 01:01:53,061 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7937, 3.3946, 4.4819, 2.9125, 3.9403, 2.5972, 2.8514, 1.9385], device='cuda:3'), covar=tensor([0.1302, 0.1050, 0.0252, 0.0972, 0.0630, 0.2583, 0.2666, 0.2342], device='cuda:3'), in_proj_covar=tensor([0.0235, 0.0257, 0.0236, 0.0211, 0.0270, 0.0286, 0.0342, 0.0251], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-10 01:02:02,321 INFO [train.py:898] (3/4) Epoch 30, batch 2100, loss[loss=0.1472, simple_loss=0.2248, pruned_loss=0.03479, over 17830.00 frames. ], tot_loss[loss=0.1543, simple_loss=0.2453, pruned_loss=0.03167, over 3601170.27 frames. ], batch size: 39, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 01:02:13,528 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=107496.0, num_to_drop=1, layers_to_drop={1}
2023-03-10 01:02:27,095 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=107508.0, num_to_drop=1, layers_to_drop={0}
2023-03-10 01:02:30,037 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107510.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:02:36,351 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.605e+02 2.608e+02 2.930e+02 3.296e+02 8.592e+02, threshold=5.861e+02, percent-clipped=2.0
2023-03-10 01:03:00,502 INFO [train.py:898] (3/4) Epoch 30, batch 2150, loss[loss=0.1475, simple_loss=0.2339, pruned_loss=0.03054, over 18254.00 frames. ], tot_loss[loss=0.1542, simple_loss=0.2449, pruned_loss=0.03169, over 3605435.39 frames. ], batch size: 47, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 01:03:23,373 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8571, 3.6395, 5.3794, 3.2014, 4.7148, 2.6487, 3.0485, 1.8971], device='cuda:3'), covar=tensor([0.1268, 0.1038, 0.0117, 0.0799, 0.0394, 0.2755, 0.2683, 0.2279], device='cuda:3'), in_proj_covar=tensor([0.0235, 0.0257, 0.0237, 0.0211, 0.0269, 0.0286, 0.0342, 0.0252], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-10 01:03:40,493 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107571.0, num_to_drop=0, layers_to_drop=set()
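
grad_scale: 8.0 in the batch summaries is the current loss scale of mixed-precision training. A minimal sketch using PyTorch's standard torch.cuda.amp machinery; the model call and its arguments are placeholders, and the real training loop wires this up differently:

import torch

scaler = torch.cuda.amp.GradScaler(init_scale=2.0)  # scale adapts over time

def train_step(model, optimizer, features, supervisions):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(features, supervisions)  # forward in fp16 where safe
    scaler.scale(loss).backward()             # backward on the scaled loss
    scaler.step(optimizer)                    # unscales grads, then steps
    scaler.update()                           # grow/shrink the scale
    return loss.detach(), scaler.get_scale()  # e.g. 8.0, as logged here

The scaler doubles the scale after a run of overflow-free steps and halves it on overflow, so a stable value like 8.0 across thousands of batches indicates the fp16 run is well conditioned.
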
2023-03-10 01:03:58,574 INFO [train.py:898] (3/4) Epoch 30, batch 2200, loss[loss=0.1313, simple_loss=0.216, pruned_loss=0.02335, over 18405.00 frames. ], tot_loss[loss=0.1539, simple_loss=0.2447, pruned_loss=0.03149, over 3607048.48 frames. ], batch size: 42, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 01:04:08,628 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7762, 5.2454, 2.6332, 5.1115, 4.9901, 5.2649, 5.0163, 2.6616], device='cuda:3'), covar=tensor([0.0290, 0.0076, 0.0897, 0.0088, 0.0074, 0.0073, 0.0107, 0.1044], device='cuda:3'), in_proj_covar=tensor([0.0095, 0.0085, 0.0099, 0.0101, 0.0091, 0.0081, 0.0088, 0.0100], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-10 01:04:32,917 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.698e+02 2.424e+02 2.775e+02 3.608e+02 1.106e+03, threshold=5.550e+02, percent-clipped=4.0
2023-03-10 01:04:56,686 INFO [train.py:898] (3/4) Epoch 30, batch 2250, loss[loss=0.158, simple_loss=0.2545, pruned_loss=0.03078, over 17820.00 frames. ], tot_loss[loss=0.1545, simple_loss=0.2455, pruned_loss=0.03179, over 3587511.17 frames. ], batch size: 70, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 01:05:22,638 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107659.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:05:54,194 INFO [train.py:898] (3/4) Epoch 30, batch 2300, loss[loss=0.1566, simple_loss=0.2429, pruned_loss=0.03513, over 18284.00 frames. ], tot_loss[loss=0.1543, simple_loss=0.2452, pruned_loss=0.03174, over 3584427.16 frames. ], batch size: 49, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 01:06:05,925 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2604, 5.2179, 5.5559, 5.5101, 5.1734, 6.0504, 5.6443, 5.2428], device='cuda:3'), covar=tensor([0.1122, 0.0658, 0.0759, 0.0828, 0.1411, 0.0678, 0.0648, 0.1824], device='cuda:3'), in_proj_covar=tensor([0.0380, 0.0309, 0.0338, 0.0343, 0.0343, 0.0451, 0.0304, 0.0447], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-10 01:06:27,337 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.765e+02 2.455e+02 2.885e+02 3.451e+02 1.072e+03, threshold=5.770e+02, percent-clipped=4.0
2023-03-10 01:06:32,248 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107720.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:06:52,544 INFO [train.py:898] (3/4) Epoch 30, batch 2350, loss[loss=0.1507, simple_loss=0.2345, pruned_loss=0.03344, over 18415.00 frames. ], tot_loss[loss=0.1543, simple_loss=0.2449, pruned_loss=0.03182, over 3586611.05 frames. ], batch size: 43, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 01:06:52,838 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107737.0, num_to_drop=0, layers_to_drop=set()
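
Each [train.py:898] summary pairs a per-batch loss ("over 18405.00 frames") with a tot_loss over roughly 3.6 million frames, so the aggregate reads as a frame-weighted running average with exponential forgetting rather than a plain mean over batches. A hypothetical re-creation of that bookkeeping; the decay constant below is chosen only so the effective window, about batch_frames / (1 - decay), lands near the logged ~3.6e6 frames for ~18k-frame batches:

class FrameWeightedLoss:
    # Running loss where each batch contributes in proportion to the
    # number of frames it covers, with exponential forgetting so the
    # aggregate tracks recent batches. (Hypothetical re-creation of the
    # bookkeeping behind "tot_loss[... over N frames]".)
    def __init__(self, decay: float = 0.995):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> float:
        self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
        self.frames = self.frames * self.decay + batch_frames
        return self.loss_sum / self.frames  # the reported running loss

tracker = FrameWeightedLoss()
print(tracker.update(0.1363, 18389.0))  # per-batch numbers as in batch 2400

Weighting by frames keeps long utterances from being underrepresented, and the forgetting factor explains why the "over N frames" count hovers near a constant instead of growing through the epoch.
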
2023-03-10 01:07:51,009 INFO [train.py:898] (3/4) Epoch 30, batch 2400, loss[loss=0.1363, simple_loss=0.2254, pruned_loss=0.02361, over 18389.00 frames. ], tot_loss[loss=0.1544, simple_loss=0.2451, pruned_loss=0.03184, over 3576092.54 frames. ], batch size: 48, lr: 3.73e-03, grad_scale: 8.0
2023-03-10 01:08:01,204 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=107796.0, num_to_drop=1, layers_to_drop={1}
2023-03-10 01:08:03,513 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107798.0, num_to_drop=1, layers_to_drop={3}
2023-03-10 01:08:14,974 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=107808.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:08:24,299 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.608e+02 2.477e+02 2.910e+02 3.480e+02 7.076e+02, threshold=5.821e+02, percent-clipped=2.0
2023-03-10 01:08:28,939 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.0558, 5.5404, 5.5339, 5.5142, 5.0574, 5.4497, 4.9023, 5.4177], device='cuda:3'), covar=tensor([0.0245, 0.0277, 0.0182, 0.0418, 0.0331, 0.0216, 0.1019, 0.0346], device='cuda:3'), in_proj_covar=tensor([0.0237, 0.0285, 0.0286, 0.0367, 0.0292, 0.0293, 0.0325, 0.0285], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-10 01:08:48,138 INFO [train.py:898] (3/4) Epoch 30, batch 2450, loss[loss=0.1684, simple_loss=0.2686, pruned_loss=0.03408, over 18208.00 frames. ], tot_loss[loss=0.1544, simple_loss=0.2452, pruned_loss=0.03175, over 3576659.70 frames. ], batch size: 60, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:08:51,487 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.9397, 3.7865, 5.0989, 4.4653, 3.4452, 3.1928, 4.6269, 5.3971], device='cuda:3'), covar=tensor([0.0788, 0.1497, 0.0240, 0.0414, 0.0987, 0.1166, 0.0370, 0.0245], device='cuda:3'), in_proj_covar=tensor([0.0158, 0.0290, 0.0182, 0.0191, 0.0202, 0.0200, 0.0207, 0.0220], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0005, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-10 01:08:56,670 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=107844.0, num_to_drop=1, layers_to_drop={1}
2023-03-10 01:09:01,163 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.3882, 5.9881, 5.5307, 5.7287, 5.5930, 5.2944, 6.0188, 5.9933], device='cuda:3'), covar=tensor([0.1169, 0.0767, 0.0533, 0.0730, 0.1353, 0.0735, 0.0617, 0.0664], device='cuda:3'), in_proj_covar=tensor([0.0649, 0.0576, 0.0411, 0.0597, 0.0796, 0.0593, 0.0811, 0.0629], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0004, 0.0005, 0.0004, 0.0006, 0.0004], device='cuda:3')
2023-03-10 01:09:04,780 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7598, 5.2002, 2.4500, 5.0718, 4.8688, 5.1797, 5.0037, 2.4637], device='cuda:3'), covar=tensor([0.0307, 0.0082, 0.1031, 0.0104, 0.0098, 0.0109, 0.0119, 0.1314], device='cuda:3'), in_proj_covar=tensor([0.0095, 0.0085, 0.0099, 0.0101, 0.0091, 0.0081, 0.0088, 0.0099], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-10 01:09:06,286 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.30 vs. limit=5.0
2023-03-10 01:09:10,568 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=107856.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:09:18,687 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.6636, 3.5373, 4.7662, 4.0499, 3.2324, 2.8457, 4.2010, 5.0222], device='cuda:3'), covar=tensor([0.0853, 0.1353, 0.0222, 0.0470, 0.1000, 0.1324, 0.0455, 0.0204], device='cuda:3'), in_proj_covar=tensor([0.0157, 0.0288, 0.0181, 0.0190, 0.0201, 0.0200, 0.0206, 0.0219], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-10 01:09:21,765 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=107866.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:09:46,131 INFO [train.py:898] (3/4) Epoch 30, batch 2500, loss[loss=0.1346, simple_loss=0.2246, pruned_loss=0.02228, over 18489.00 frames. ], tot_loss[loss=0.1542, simple_loss=0.2451, pruned_loss=0.03164, over 3580523.68 frames. ], batch size: 47, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:10:19,340 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.763e+02 2.392e+02 2.985e+02 3.620e+02 9.433e+02, threshold=5.970e+02, percent-clipped=3.0
2023-03-10 01:10:44,309 INFO [train.py:898] (3/4) Epoch 30, batch 2550, loss[loss=0.1364, simple_loss=0.2283, pruned_loss=0.02226, over 18504.00 frames. ], tot_loss[loss=0.1542, simple_loss=0.2454, pruned_loss=0.03154, over 3571043.33 frames. ], batch size: 47, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:11:15,901 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.8092, 3.1853, 3.9631, 2.9189, 3.6591, 2.6325, 2.8032, 2.2757], device='cuda:3'), covar=tensor([0.1187, 0.1114, 0.0368, 0.0868, 0.0656, 0.2331, 0.2307, 0.1910], device='cuda:3'), in_proj_covar=tensor([0.0236, 0.0258, 0.0237, 0.0212, 0.0270, 0.0286, 0.0343, 0.0252], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:3')
2023-03-10 01:11:42,236 INFO [train.py:898] (3/4) Epoch 30, batch 2600, loss[loss=0.1623, simple_loss=0.2533, pruned_loss=0.03569, over 18483.00 frames. ], tot_loss[loss=0.155, simple_loss=0.246, pruned_loss=0.03198, over 3565828.57 frames. ], batch size: 53, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:11:52,592 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8707, 4.5049, 4.5118, 3.4514, 3.8278, 3.4967, 2.7298, 2.6267], device='cuda:3'), covar=tensor([0.0246, 0.0164, 0.0096, 0.0320, 0.0330, 0.0258, 0.0709, 0.0827], device='cuda:3'), in_proj_covar=tensor([0.0078, 0.0065, 0.0073, 0.0074, 0.0094, 0.0072, 0.0080, 0.0089], device='cuda:3'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-10 01:12:19,592 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108015.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:12:20,528 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.532e+02 2.560e+02 3.005e+02 3.499e+02 6.456e+02, threshold=6.010e+02, percent-clipped=1.0
2023-03-10 01:12:44,879 INFO [train.py:898] (3/4) Epoch 30, batch 2650, loss[loss=0.1404, simple_loss=0.2236, pruned_loss=0.02856, over 18439.00 frames. ], tot_loss[loss=0.1551, simple_loss=0.2461, pruned_loss=0.03204, over 3566875.71 frames. ], batch size: 43, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:13:41,197 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.7388, 3.1402, 4.5001, 3.7307, 2.8407, 4.6898, 3.9702, 3.1218], device='cuda:3'), covar=tensor([0.0530, 0.1334, 0.0284, 0.0497, 0.1480, 0.0210, 0.0506, 0.0905], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0243, 0.0238, 0.0174, 0.0225, 0.0219, 0.0258, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 01:13:42,695 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-10 01:13:44,285 INFO [train.py:898] (3/4) Epoch 30, batch 2700, loss[loss=0.1546, simple_loss=0.2451, pruned_loss=0.03204, over 18642.00 frames. ], tot_loss[loss=0.1548, simple_loss=0.2455, pruned_loss=0.032, over 3568913.37 frames. ], batch size: 52, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:13:49,958 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=2.87 vs. limit=5.0
2023-03-10 01:13:51,681 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108093.0, num_to_drop=1, layers_to_drop={3}
2023-03-10 01:14:17,647 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.687e+02 2.532e+02 2.946e+02 3.512e+02 1.150e+03, threshold=5.891e+02, percent-clipped=1.0
2023-03-10 01:14:38,686 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.1728, 4.3016, 2.7875, 4.2998, 5.3720, 3.2234, 3.9344, 3.9619], device='cuda:3'), covar=tensor([0.0191, 0.1566, 0.1463, 0.0598, 0.0138, 0.0928, 0.0695, 0.0895], device='cuda:3'), in_proj_covar=tensor([0.0190, 0.0289, 0.0215, 0.0205, 0.0150, 0.0188, 0.0225, 0.0235], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0004, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0003], device='cuda:3')
2023-03-10 01:14:42,600 INFO [train.py:898] (3/4) Epoch 30, batch 2750, loss[loss=0.18, simple_loss=0.2632, pruned_loss=0.04842, over 18374.00 frames. ], tot_loss[loss=0.1553, simple_loss=0.2462, pruned_loss=0.03218, over 3564463.20 frames. ], batch size: 46, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:14:50,200 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.8096, 4.3881, 4.3275, 3.3569, 3.6188, 3.3406, 2.5691, 2.6898], device='cuda:3'), covar=tensor([0.0220, 0.0146, 0.0096, 0.0325, 0.0344, 0.0256, 0.0728, 0.0734], device='cuda:3'), in_proj_covar=tensor([0.0077, 0.0065, 0.0072, 0.0073, 0.0094, 0.0072, 0.0080, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0007, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-10 01:15:16,466 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108166.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:15:41,166 INFO [train.py:898] (3/4) Epoch 30, batch 2800, loss[loss=0.126, simple_loss=0.2115, pruned_loss=0.02025, over 18496.00 frames. ], tot_loss[loss=0.1547, simple_loss=0.2456, pruned_loss=0.03192, over 3562184.95 frames. ], batch size: 44, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:16:12,747 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=108214.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:16:14,905 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.768e+02 2.463e+02 2.976e+02 3.777e+02 6.000e+02, threshold=5.952e+02, percent-clipped=1.0
2023-03-10 01:16:40,172 INFO [train.py:898] (3/4) Epoch 30, batch 2850, loss[loss=0.1558, simple_loss=0.2535, pruned_loss=0.02902, over 17742.00 frames. ], tot_loss[loss=0.1545, simple_loss=0.2454, pruned_loss=0.03187, over 3560037.15 frames. ], batch size: 70, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:16:47,364 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.4253, 2.9301, 4.0344, 3.4631, 2.6014, 4.2147, 3.7350, 2.8646], device='cuda:3'), covar=tensor([0.0568, 0.1353, 0.0376, 0.0478, 0.1542, 0.0223, 0.0567, 0.0881], device='cuda:3'), in_proj_covar=tensor([0.0222, 0.0243, 0.0238, 0.0174, 0.0226, 0.0218, 0.0258, 0.0197], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 01:17:08,482 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9208, 5.4763, 2.8314, 5.3067, 5.2188, 5.4777, 5.2942, 2.7958], device='cuda:3'), covar=tensor([0.0237, 0.0063, 0.0753, 0.0075, 0.0077, 0.0075, 0.0089, 0.0984], device='cuda:3'), in_proj_covar=tensor([0.0095, 0.0085, 0.0099, 0.0101, 0.0091, 0.0081, 0.0087, 0.0099], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0005, 0.0004, 0.0004, 0.0005, 0.0005], device='cuda:3')
2023-03-10 01:17:09,665 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6432, 3.3747, 2.2700, 4.4499, 3.1276, 4.1422, 2.6386, 3.8650], device='cuda:3'), covar=tensor([0.0698, 0.0951, 0.1574, 0.0451, 0.0853, 0.0333, 0.1229, 0.0468], device='cuda:3'), in_proj_covar=tensor([0.0227, 0.0237, 0.0199, 0.0302, 0.0201, 0.0272, 0.0211, 0.0211], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:3')
2023-03-10 01:17:38,312 INFO [train.py:898] (3/4) Epoch 30, batch 2900, loss[loss=0.1409, simple_loss=0.2295, pruned_loss=0.0262, over 18297.00 frames. ], tot_loss[loss=0.1546, simple_loss=0.2457, pruned_loss=0.03178, over 3562114.23 frames. ], batch size: 49, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:18:11,949 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108315.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:18:12,788 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.844e+02 2.541e+02 3.023e+02 3.918e+02 9.927e+02, threshold=6.045e+02, percent-clipped=2.0
2023-03-10 01:18:23,044 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108325.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:18:25,896 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.4109, 5.3053, 5.7388, 5.6893, 5.3495, 6.2082, 5.8989, 5.4836], device='cuda:3'), covar=tensor([0.1126, 0.0607, 0.0826, 0.0725, 0.1393, 0.0695, 0.0663, 0.1620], device='cuda:3'), in_proj_covar=tensor([0.0379, 0.0308, 0.0336, 0.0339, 0.0342, 0.0448, 0.0302, 0.0443], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-10 01:18:36,945 INFO [train.py:898] (3/4) Epoch 30, batch 2950, loss[loss=0.1643, simple_loss=0.2619, pruned_loss=0.03339, over 17049.00 frames. ], tot_loss[loss=0.1544, simple_loss=0.2455, pruned_loss=0.0316, over 3572895.95 frames. ], batch size: 78, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:19:03,222 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0
2023-03-10 01:19:03,906 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2198, 5.1851, 5.5089, 5.5149, 5.1564, 5.9835, 5.6692, 5.2663], device='cuda:3'), covar=tensor([0.1112, 0.0654, 0.0787, 0.0820, 0.1461, 0.0720, 0.0638, 0.1776], device='cuda:3'), in_proj_covar=tensor([0.0379, 0.0308, 0.0336, 0.0339, 0.0342, 0.0449, 0.0303, 0.0444], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-10 01:19:07,161 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=108363.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:19:22,331 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.5418, 3.4743, 3.3799, 2.8835, 3.2362, 2.6214, 2.7755, 3.4959], device='cuda:3'), covar=tensor([0.0096, 0.0143, 0.0122, 0.0236, 0.0161, 0.0319, 0.0288, 0.0102], device='cuda:3'), in_proj_covar=tensor([0.0164, 0.0183, 0.0153, 0.0204, 0.0163, 0.0194, 0.0199, 0.0142], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-10 01:19:29,516 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.6637, 3.6905, 3.5017, 3.1691, 3.4033, 2.8991, 2.8907, 3.6936], device='cuda:3'), covar=tensor([0.0081, 0.0098, 0.0099, 0.0152, 0.0129, 0.0201, 0.0216, 0.0072], device='cuda:3'), in_proj_covar=tensor([0.0164, 0.0183, 0.0152, 0.0204, 0.0163, 0.0193, 0.0199, 0.0141], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002], device='cuda:3')
2023-03-10 01:19:34,191 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108386.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:19:34,896 INFO [train.py:898] (3/4) Epoch 30, batch 3000, loss[loss=0.1526, simple_loss=0.2489, pruned_loss=0.02816, over 18288.00 frames. ], tot_loss[loss=0.1544, simple_loss=0.2455, pruned_loss=0.0317, over 3578547.84 frames. ], batch size: 57, lr: 3.72e-03, grad_scale: 8.0
2023-03-10 01:19:34,896 INFO [train.py:923] (3/4) Computing validation loss
2023-03-10 01:19:45,953 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([2.7241, 2.3251, 2.2311, 2.3240, 2.8807, 2.8615, 2.7152, 2.4508], device='cuda:3'), covar=tensor([0.0246, 0.0306, 0.0595, 0.0441, 0.0230, 0.0224, 0.0454, 0.0433], device='cuda:3'), in_proj_covar=tensor([0.0150, 0.0152, 0.0168, 0.0169, 0.0147, 0.0133, 0.0163, 0.0168], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0003, 0.0004], device='cuda:3')
2023-03-10 01:19:47,077 INFO [train.py:932] (3/4) Epoch 30, validation: loss=0.1491, simple_loss=0.2469, pruned_loss=0.02567, over 944034.00 frames.
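
At train.py:923/932 the loop pauses mid-epoch to score the dev sets; the frame count (944034.00) is identical at every validation pass, so the same dev cuts are reused throughout training. A minimal sketch of frame-weighted validation scoring, with argument names and the model's return convention assumed:

import torch

@torch.no_grad()
def compute_validation_loss(model, valid_loader, device):
    # Accumulate frame-weighted losses over the fixed dev set and
    # return the average. The total frame count stays constant across
    # epochs, matching the repeated "over 944034.00 frames" lines.
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    for batch in valid_loader:
        features = batch["features"].to(device)
        supervisions = batch["supervisions"]
        loss, num_frames = model(features, supervisions)  # assumed API
        tot_loss += loss.item() * num_frames
        tot_frames += num_frames
    model.train()  # restore training mode before resuming the epoch
    return tot_loss / tot_frames
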
2023-03-10 01:19:47,078 INFO [train.py:933] (3/4) Maximum memory allocated so far is 19934MB
2023-03-10 01:19:54,099 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108393.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:20:20,293 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.803e+02 2.459e+02 2.931e+02 3.720e+02 6.119e+02, threshold=5.863e+02, percent-clipped=1.0
2023-03-10 01:20:39,264 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108432.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:20:45,381 INFO [train.py:898] (3/4) Epoch 30, batch 3050, loss[loss=0.1556, simple_loss=0.2506, pruned_loss=0.03036, over 18480.00 frames. ], tot_loss[loss=0.1546, simple_loss=0.2458, pruned_loss=0.03176, over 3582328.12 frames. ], batch size: 53, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:20:50,590 INFO [zipformer.py:625] (3/4) warmup_begin=666.7, warmup_end=1333.3, batch_count=108441.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:21:18,000 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0
2023-03-10 01:21:29,780 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.94 vs. limit=5.0
2023-03-10 01:21:43,461 INFO [train.py:898] (3/4) Epoch 30, batch 3100, loss[loss=0.172, simple_loss=0.2654, pruned_loss=0.0393, over 18057.00 frames. ], tot_loss[loss=0.1548, simple_loss=0.2459, pruned_loss=0.03189, over 3578906.22 frames. ], batch size: 65, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:21:51,783 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108493.0, num_to_drop=1, layers_to_drop={0}
2023-03-10 01:22:17,724 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.743e+02 2.471e+02 2.835e+02 3.390e+02 1.271e+03, threshold=5.670e+02, percent-clipped=3.0
2023-03-10 01:22:24,121 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=192, metric=1.54 vs. limit=2.0
2023-03-10 01:22:41,854 INFO [train.py:898] (3/4) Epoch 30, batch 3150, loss[loss=0.1702, simple_loss=0.2592, pruned_loss=0.04061, over 18256.00 frames. ], tot_loss[loss=0.1548, simple_loss=0.2462, pruned_loss=0.03169, over 3587701.00 frames. ], batch size: 60, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:23:22,195 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9588, 5.4813, 5.4687, 5.4392, 4.9784, 5.3957, 4.9037, 5.4020], device='cuda:3'), covar=tensor([0.0242, 0.0250, 0.0169, 0.0416, 0.0372, 0.0225, 0.0935, 0.0284], device='cuda:3'), in_proj_covar=tensor([0.0237, 0.0282, 0.0284, 0.0363, 0.0291, 0.0292, 0.0321, 0.0282], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-10 01:23:29,767 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108577.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:23:40,620 INFO [train.py:898] (3/4) Epoch 30, batch 3200, loss[loss=0.1573, simple_loss=0.2456, pruned_loss=0.03448, over 18296.00 frames. ], tot_loss[loss=0.1543, simple_loss=0.2454, pruned_loss=0.03155, over 3576816.99 frames. ], batch size: 49, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:24:16,501 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.665e+02 2.358e+02 2.876e+02 3.361e+02 7.357e+02, threshold=5.753e+02, percent-clipped=3.0
2023-03-10 01:24:39,475 INFO [train.py:898] (3/4) Epoch 30, batch 3250, loss[loss=0.1494, simple_loss=0.2478, pruned_loss=0.02548, over 16089.00 frames. ], tot_loss[loss=0.1543, simple_loss=0.2453, pruned_loss=0.03162, over 3577424.94 frames. ], batch size: 94, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:24:41,093 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108638.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:25:15,612 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.7332, 5.2287, 5.1992, 5.2028, 4.7162, 5.1281, 4.6693, 5.1320], device='cuda:3'), covar=tensor([0.0261, 0.0293, 0.0215, 0.0493, 0.0418, 0.0265, 0.1018, 0.0343], device='cuda:3'), in_proj_covar=tensor([0.0237, 0.0282, 0.0284, 0.0363, 0.0290, 0.0292, 0.0320, 0.0282], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-10 01:25:31,371 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108681.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:25:37,772 INFO [train.py:898] (3/4) Epoch 30, batch 3300, loss[loss=0.1389, simple_loss=0.2347, pruned_loss=0.02153, over 18270.00 frames. ], tot_loss[loss=0.1544, simple_loss=0.2456, pruned_loss=0.03158, over 3586069.38 frames. ], batch size: 49, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:26:12,808 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.499e+02 2.588e+02 2.944e+02 3.535e+02 1.316e+03, threshold=5.888e+02, percent-clipped=3.0
2023-03-10 01:26:36,460 INFO [train.py:898] (3/4) Epoch 30, batch 3350, loss[loss=0.1698, simple_loss=0.2605, pruned_loss=0.03957, over 18290.00 frames. ], tot_loss[loss=0.1538, simple_loss=0.2449, pruned_loss=0.03133, over 3581823.55 frames. ], batch size: 57, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:26:56,119 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([5.2402, 5.1599, 5.5425, 5.5459, 5.1971, 6.0708, 5.7065, 5.2151], device='cuda:3'), covar=tensor([0.1089, 0.0647, 0.0809, 0.0819, 0.1397, 0.0656, 0.0697, 0.1835], device='cuda:3'), in_proj_covar=tensor([0.0381, 0.0309, 0.0338, 0.0342, 0.0345, 0.0451, 0.0306, 0.0448], device='cuda:3'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004], device='cuda:3')
2023-03-10 01:27:10,587 INFO [scaling.py:679] (3/4) Whitening: num_groups=1, num_channels=384, metric=3.71 vs. limit=5.0
2023-03-10 01:27:34,247 INFO [train.py:898] (3/4) Epoch 30, batch 3400, loss[loss=0.1517, simple_loss=0.237, pruned_loss=0.03324, over 18362.00 frames. ], tot_loss[loss=0.1542, simple_loss=0.2451, pruned_loss=0.0316, over 3579687.87 frames. ], batch size: 46, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:27:35,607 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108788.0, num_to_drop=1, layers_to_drop={3}
2023-03-10 01:27:39,015 INFO [zipformer.py:625] (3/4) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108791.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:28:09,511 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.896e+02 2.642e+02 3.152e+02 3.667e+02 6.078e+02, threshold=6.305e+02, percent-clipped=1.0
2023-03-10 01:28:33,122 INFO [train.py:898] (3/4) Epoch 30, batch 3450, loss[loss=0.1665, simple_loss=0.2616, pruned_loss=0.03572, over 18561.00 frames. ], tot_loss[loss=0.1534, simple_loss=0.2443, pruned_loss=0.03128, over 3592408.60 frames. ], batch size: 54, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:28:50,315 INFO [zipformer.py:625] (3/4) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108852.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:29:31,190 INFO [train.py:898] (3/4) Epoch 30, batch 3500, loss[loss=0.1567, simple_loss=0.2543, pruned_loss=0.02951, over 16038.00 frames. ], tot_loss[loss=0.1535, simple_loss=0.2446, pruned_loss=0.0312, over 3595155.85 frames. ], batch size: 94, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:29:48,255 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.9204, 5.0311, 5.0057, 4.7283, 4.6928, 4.7517, 5.0752, 5.0965], device='cuda:3'), covar=tensor([0.0085, 0.0075, 0.0075, 0.0149, 0.0074, 0.0178, 0.0106, 0.0132], device='cuda:3'), in_proj_covar=tensor([0.0102, 0.0077, 0.0082, 0.0102, 0.0082, 0.0110, 0.0094, 0.0093], device='cuda:3'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0004], device='cuda:3')
2023-03-10 01:29:57,202 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.9365, 3.3433, 4.5927, 3.8538, 2.9674, 4.8989, 4.0579, 3.2154], device='cuda:3'), covar=tensor([0.0466, 0.1229, 0.0259, 0.0486, 0.1393, 0.0172, 0.0571, 0.0847], device='cuda:3'), in_proj_covar=tensor([0.0225, 0.0248, 0.0241, 0.0177, 0.0230, 0.0222, 0.0262, 0.0201], device='cuda:3'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:3')
2023-03-10 01:30:00,253 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([4.8054, 5.2732, 5.2116, 5.2499, 4.7489, 5.1616, 4.7089, 5.1772], device='cuda:3'), covar=tensor([0.0236, 0.0273, 0.0205, 0.0474, 0.0394, 0.0236, 0.0969, 0.0327], device='cuda:3'), in_proj_covar=tensor([0.0236, 0.0281, 0.0283, 0.0361, 0.0289, 0.0290, 0.0319, 0.0280], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0006, 0.0006, 0.0008, 0.0005, 0.0006, 0.0006, 0.0006], device='cuda:3')
2023-03-10 01:30:05,389 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.699e+02 2.567e+02 2.966e+02 3.432e+02 4.808e+02, threshold=5.933e+02, percent-clipped=0.0
2023-03-10 01:30:23,126 INFO [zipformer.py:625] (3/4) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108933.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:30:27,372 INFO [train.py:898] (3/4) Epoch 30, batch 3550, loss[loss=0.155, simple_loss=0.2479, pruned_loss=0.03106, over 18333.00 frames. ], tot_loss[loss=0.1531, simple_loss=0.2441, pruned_loss=0.0311, over 3586451.73 frames. ], batch size: 54, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:31:14,399 INFO [zipformer.py:625] (3/4) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108981.0, num_to_drop=0, layers_to_drop=set()
2023-03-10 01:31:20,791 INFO [train.py:898] (3/4) Epoch 30, batch 3600, loss[loss=0.1538, simple_loss=0.2525, pruned_loss=0.02759, over 18348.00 frames. ], tot_loss[loss=0.154, simple_loss=0.2453, pruned_loss=0.03134, over 3575303.70 frames. ], batch size: 55, lr: 3.71e-03, grad_scale: 8.0
2023-03-10 01:31:47,035 INFO [scaling.py:679] (3/4) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-10 01:31:47,446 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0716, 3.4294, 3.3282, 2.9111, 3.0476, 2.8079, 2.5121, 2.4138], device='cuda:3'), covar=tensor([0.0292, 0.0167, 0.0152, 0.0320, 0.0351, 0.0286, 0.0623, 0.0709], device='cuda:3'), in_proj_covar=tensor([0.0077, 0.0064, 0.0071, 0.0073, 0.0093, 0.0071, 0.0079, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-10 01:31:51,356 INFO [zipformer.py:1455] (3/4) attn_weights_entropy = tensor([3.0908, 3.4335, 3.3426, 2.8920, 3.0768, 2.8024, 2.4695, 2.3680], device='cuda:3'), covar=tensor([0.0274, 0.0186, 0.0159, 0.0338, 0.0366, 0.0270, 0.0649, 0.0731], device='cuda:3'), in_proj_covar=tensor([0.0077, 0.0064, 0.0071, 0.0073, 0.0093, 0.0071, 0.0079, 0.0088], device='cuda:3'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0006, 0.0005, 0.0006, 0.0006], device='cuda:3')
2023-03-10 01:31:52,858 INFO [optim.py:369] (3/4) Clipping_scale=2.0, grad-norm quartiles 1.673e+02 2.589e+02 3.051e+02 3.699e+02 9.146e+02, threshold=6.102e+02, percent-clipped=6.0
2023-03-10 01:31:56,932 INFO [train.py:1165] (3/4) Done!