2023-05-11 11:12:24,089 INFO [train.py:1091] (0/2) Training started
2023-05-11 11:12:24,092 INFO [train.py:1101] (0/2) Device: cuda:0
2023-05-11 11:12:24,096 INFO [train.py:1110] (0/2) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '7efe024b23078ffa0bcb5598afff14f356edae7c', 'k2-git-date': 'Mon Jan 30 20:22:57 2023', 'lhotse-version': '1.12.0.dev+git.891bad1.clean', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'from_dan_scaled_adam_exp1119', 'icefall-git-sha1': '432b2fa3-dirty', 'icefall-git-date': 'Mon May 8 18:46:45 2023', 'icefall-path': '/ceph-zw/workspace/zipformer/icefall_dan_streaming', 'k2-path': '/ceph-zw/workspace/k2/k2/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-zw/workspace/share/lhotse/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-7-1218101249-5d97868c7c-v8ngc', 'IP address': '10.177.77.18'}, 'world_size': 2, 'master_port': 12348, 'tensorboard': True, 'num_epochs': 40, 'start_epoch': 36, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp1119-smaller-md1500'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'base_lr': 0.04, 'lr_batches': 7500, 'lr_epochs': 3.5, 'lr_warmup_start': 0.5, 'ref_duration': 600, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 4000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': True, 'num_encoder_layers': '2,2,2,2,2,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,768,768,768,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,256,256,256,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,192,192,192,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 1500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'blank_id': 0, 'vocab_size': 500}
2023-05-11 11:12:24,096 INFO [train.py:1112] (0/2) About to create model
2023-05-11 11:12:24,440 INFO [train.py:1116] (0/2) Number of model parameters: 23285615
2023-05-11 11:12:24,794 INFO [checkpoint.py:112] (0/2) Loading checkpoint from pruned_transducer_stateless7/exp1119-smaller-md1500/epoch-35.pt
2023-05-11 11:12:25,379 INFO [checkpoint.py:131] (0/2) Loading averaged model
2023-05-11 11:12:30,550 INFO [train.py:1131] (0/2) Using DDP
2023-05-11 11:12:30,765 INFO [train.py:1145] (0/2) Loading optimizer state dict
2023-05-11 11:12:30,952 INFO [train.py:1153] (0/2) Loading scheduler state dict
2023-05-11 11:12:30,952 INFO [asr_datamodule.py:409] (0/2) About to get train-clean-100 cuts
2023-05-11 11:12:30,975 INFO [asr_datamodule.py:416] (0/2) About to get train-clean-360 cuts
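The per-stack Zipformer options in the parameter dump above ('num_encoder_layers': '2,2,2,2,2,2', 'downsampling_factor': '1,2,4,8,4,2', 'encoder_dim': '192,256,256,256,256,256', and so on) are comma-separated strings with one value per encoder stack. Below is a minimal sketch of how such strings are typically parsed into per-stack tuples; the helper name is illustrative, not necessarily the exact function used by this recipe's train.py.

```python
def to_int_tuple(s: str) -> tuple:
    """Parse a comma-separated option such as '192,256,256,256,256,256'
    into a tuple of ints, one entry per Zipformer encoder stack."""
    return tuple(int(x) for x in s.split(","))

# Values copied from the parameter dump above: six encoder stacks.
num_encoder_layers = to_int_tuple("2,2,2,2,2,2")
downsampling_factor = to_int_tuple("1,2,4,8,4,2")
encoder_dim = to_int_tuple("192,256,256,256,256,256")
assert len(num_encoder_layers) == len(downsampling_factor) == len(encoder_dim) == 6
```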
2023-05-11 11:12:30,978 INFO [asr_datamodule.py:423] (0/2) About to get train-other-500 cuts
2023-05-11 11:12:30,979 INFO [asr_datamodule.py:225] (0/2) Enable MUSAN
2023-05-11 11:12:30,979 INFO [asr_datamodule.py:226] (0/2) About to get Musan cuts
2023-05-11 11:12:33,433 INFO [asr_datamodule.py:254] (0/2) Enable SpecAugment
2023-05-11 11:12:33,434 INFO [asr_datamodule.py:255] (0/2) Time warp factor: 80
2023-05-11 11:12:33,434 INFO [asr_datamodule.py:267] (0/2) Num frame mask: 10
2023-05-11 11:12:33,434 INFO [asr_datamodule.py:280] (0/2) About to create train dataset
2023-05-11 11:12:33,434 INFO [asr_datamodule.py:309] (0/2) Using DynamicBucketingSampler.
2023-05-11 11:12:37,018 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp1.1 from training. Duration: 22.2954375
2023-05-11 11:12:38,235 INFO [asr_datamodule.py:324] (0/2) About to create train dataloader
2023-05-11 11:12:38,236 INFO [asr_datamodule.py:430] (0/2) About to get dev-clean cuts
2023-05-11 11:12:38,237 INFO [asr_datamodule.py:437] (0/2) About to get dev-other cuts
2023-05-11 11:12:38,238 INFO [asr_datamodule.py:355] (0/2) About to create dev dataset
2023-05-11 11:12:38,492 INFO [asr_datamodule.py:374] (0/2) About to create dev dataloader
2023-05-11 11:12:38,492 INFO [train.py:1329] (0/2) Sanity check -- see if any of the batches in epoch 1 would cause OOM.
2023-05-11 11:12:42,345 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp1.1 from training. Duration: 22.2954375
2023-05-11 11:12:47,448 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp1.1 from training. Duration: 22.2954375
2023-05-11 11:12:51,092 WARNING [train.py:1182] (0/2) Exclude cut with ID 298-126791-0067-24026-0_sp0.9 from training. Duration: 21.438875
2023-05-11 11:12:51,273 WARNING [train.py:1182] (0/2) Exclude cut with ID 5652-39938-0025-23684-0_sp0.9 from training. Duration: 22.2055625
2023-05-11 11:12:58,553 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0 from training. Duration: 24.525
2023-05-11 11:12:59,746 WARNING [train.py:1182] (0/2) Exclude cut with ID 3699-47246-0007-3408-0_sp0.9 from training. Duration: 20.26675
2023-05-11 11:13:00,209 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp0.9 from training. Duration: 27.25
2023-05-11 11:13:03,429 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0 from training. Duration: 21.68
2023-05-11 11:13:03,843 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0 from training. Duration: 21.6300625
2023-05-11 11:13:04,670 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0_sp0.9 from training. Duration: 24.033375
2023-05-11 11:13:07,219 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0 from training. Duration: 22.905
2023-05-11 11:13:07,259 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp1.1 from training. Duration: 23.4318125
2023-05-11 11:13:11,955 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp1.1 from training. Duration: 20.82275
2023-05-11 11:13:12,004 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp0.9 from training. Duration: 25.45
2023-05-11 11:13:14,138 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0 from training. Duration: 25.775
2023-05-11 11:13:14,895 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0_sp0.9 from training. Duration: 22.25
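The repeated "Exclude cut with ID ... from training. Duration: ..." warnings above, and throughout the rest of this log, come from a duration filter applied to the training cuts before batching: every excluded cut is either longer than 20 seconds or shorter than 1 second. Below is a minimal sketch of such a filter, assuming the lhotse CutSet API; the function name and the exact 1 s / 20 s bounds are inferred from the durations in this log rather than copied from this recipe's train.py.

```python
from lhotse import CutSet


def remove_short_and_long_utts(cuts: CutSet, min_dur: float = 1.0, max_dur: float = 20.0) -> CutSet:
    """Drop utterances outside [min_dur, max_dur] seconds before training.

    Cuts outside this range are the ones reported by the
    'Exclude cut with ID ... from training. Duration: ...' warnings.
    """
    def keep(cut) -> bool:
        # Bounds inferred from this log: every excluded cut is < 1 s or > 20 s.
        return min_dur <= cut.duration <= max_dur

    return cuts.filter(keep)
```

Note that speed-perturbed copies (the _sp0.9 / _sp1.1 suffixes) are filtered on their perturbed duration, which is why the same utterance can be excluded at one speed factor but kept at another.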
2023-05-11 11:13:15,837 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0 from training. Duration: 26.205
2023-05-11 11:13:17,009 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp0.9 from training. Duration: 30.1555625
2023-05-11 11:13:17,200 WARNING [train.py:1182] (0/2) Exclude cut with ID 1265-135635-0050-6781-0_sp0.9 from training. Duration: 21.8333125
2023-05-11 11:13:17,512 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp1.1 from training. Duration: 20.6545625
2023-05-11 11:13:18,971 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0045-39920-0_sp0.9 from training. Duration: 20.52225
2023-05-11 11:13:19,715 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp0.9 from training. Duration: 29.1166875
2023-05-11 11:13:22,360 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133211-0007-59831-0_sp0.9 from training. Duration: 21.388875
2023-05-11 11:13:23,611 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0 from training. Duration: 22.72
2023-05-11 11:13:23,661 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0_sp0.9 from training. Duration: 22.7444375
2023-05-11 11:13:25,142 WARNING [train.py:1182] (0/2) Exclude cut with ID 4133-6541-0027-40495-0_sp1.1 from training. Duration: 0.9681875
2023-05-11 11:13:25,275 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0_sp0.9 from training. Duration: 22.3166875
2023-05-11 11:13:26,042 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133212-0015-59917-0_sp0.9 from training. Duration: 21.8166875
2023-05-11 11:13:29,662 WARNING [train.py:1182] (0/2) Exclude cut with ID 4957-30119-0041-23990-0_sp0.9 from training. Duration: 20.22775
2023-05-11 11:13:31,775 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp1.1 from training. Duration: 24.67275
2023-05-11 11:13:32,754 WARNING [train.py:1182] (0/2) Exclude cut with ID 3082-165428-0081-50734-0_sp0.9 from training. Duration: 21.8055625
2023-05-11 11:13:34,059 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0_sp0.9 from training. Duration: 22.6666875
2023-05-11 11:13:36,828 WARNING [train.py:1182] (0/2) Exclude cut with ID 2411-132532-0017-82279-0_sp1.1 from training. Duration: 0.9681875
2023-05-11 11:13:37,778 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0 from training. Duration: 22.485
2023-05-11 11:13:39,668 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp1.1 from training. Duration: 23.82275
2023-05-11 11:13:40,140 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0 from training. Duration: 20.77
2023-05-11 11:13:40,403 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0_sp0.9 from training. Duration: 24.088875
2023-05-11 11:13:41,434 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp1.1 from training. Duration: 20.4409375
2023-05-11 11:13:44,727 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0_sp0.9 from training. Duration: 22.511125
2023-05-11 11:13:44,750 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0 from training. Duration: 20.675
2023-05-11 11:13:48,270 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp0.9 from training.
Duration: 24.9833125 2023-05-11 11:13:49,986 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0 from training. Duration: 27.14 2023-05-11 11:13:50,533 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0 from training. Duration: 22.44 2023-05-11 11:13:53,876 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0060-62364-0_sp0.9 from training. Duration: 21.361125 2023-05-11 11:13:54,106 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp1.1 from training. Duration: 27.0318125 2023-05-11 11:13:54,455 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp0.9 from training. Duration: 28.638875 2023-05-11 11:13:55,022 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0 from training. Duration: 20.4 2023-05-11 11:13:56,164 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0 from training. Duration: 20.025 2023-05-11 11:13:56,173 WARNING [train.py:1182] (0/2) Exclude cut with ID 2364-131735-0112-64612-0_sp0.9 from training. Duration: 20.488875 2023-05-11 11:13:56,373 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0 from training. Duration: 29.735 2023-05-11 11:13:59,919 WARNING [train.py:1182] (0/2) Exclude cut with ID 7276-92427-0014-12983-0_sp0.9 from training. Duration: 21.3055625 2023-05-11 11:13:59,968 WARNING [train.py:1182] (0/2) Exclude cut with ID 1025-75365-0008-79168-0_sp0.9 from training. Duration: 22.0666875 2023-05-11 11:14:04,315 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0 from training. Duration: 20.26 2023-05-11 11:14:04,799 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0030-9324-0_sp0.9 from training. Duration: 21.3444375 2023-05-11 11:14:06,777 WARNING [train.py:1182] (0/2) Exclude cut with ID 497-129325-0061-62254-0_sp1.1 from training. Duration: 0.97725 2023-05-11 11:14:08,844 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0_sp0.9 from training. Duration: 22.97225 2023-05-11 11:14:09,933 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0047-39922-0_sp0.9 from training. Duration: 21.97775 2023-05-11 11:14:10,411 WARNING [train.py:1182] (0/2) Exclude cut with ID 1112-1043-0006-89194-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 11:14:10,785 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0 from training. Duration: 20.47 2023-05-11 11:14:13,688 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0037-39912-0_sp0.9 from training. Duration: 20.67225 2023-05-11 11:14:14,377 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp0.9 from training. Duration: 25.2444375 2023-05-11 11:14:15,230 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0021-76797-0_sp0.9 from training. Duration: 21.1445 2023-05-11 11:14:18,415 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp0.9 from training. Duration: 33.038875 2023-05-11 11:14:19,875 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64291-0000-16059-0_sp0.9 from training. Duration: 20.0944375 2023-05-11 11:14:20,444 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp1.1 from training. Duration: 20.4 2023-05-11 11:14:20,735 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0 from training. Duration: 20.085 2023-05-11 11:14:21,132 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0_sp0.9 from training. 
Duration: 23.07775 2023-05-11 11:14:23,441 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp0.9 from training. Duration: 24.9333125 2023-05-11 11:14:25,029 WARNING [train.py:1182] (0/2) Exclude cut with ID 5118-111612-0016-124680-0_sp0.9 from training. Duration: 20.388875 2023-05-11 11:14:25,259 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp1.1 from training. Duration: 20.3590625 2023-05-11 11:14:28,242 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0_sp1.1 from training. Duration: 0.836375 2023-05-11 11:14:29,742 WARNING [train.py:1182] (0/2) Exclude cut with ID 8565-290391-0049-67394-0_sp0.9 from training. Duration: 21.3166875 2023-05-11 11:14:31,391 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0029-104863-0_sp0.9 from training. Duration: 22.1055625 2023-05-11 11:14:31,781 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp1.1 from training. Duration: 21.77725 2023-05-11 11:14:32,524 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp0.9 from training. Duration: 27.8166875 2023-05-11 11:14:33,388 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp1.1 from training. Duration: 22.5090625 2023-05-11 11:14:33,598 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0 from training. Duration: 25.035 2023-05-11 11:14:34,225 WARNING [train.py:1182] (0/2) Exclude cut with ID 774-127930-0014-10412-0_sp1.1 from training. Duration: 0.95 2023-05-11 11:14:34,871 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp0.9 from training. Duration: 0.92225 2023-05-11 11:14:36,363 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0 from training. Duration: 21.97 2023-05-11 11:14:37,018 WARNING [train.py:1182] (0/2) Exclude cut with ID 7492-105653-0055-62765-0_sp0.9 from training. Duration: 21.97225 2023-05-11 11:14:37,043 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp0.9 from training. Duration: 25.3333125 2023-05-11 11:14:37,413 WARNING [train.py:1182] (0/2) Exclude cut with ID 5172-29468-0015-19128-0_sp0.9 from training. Duration: 21.5055625 2023-05-11 11:14:37,749 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp1.1 from training. Duration: 20.72725 2023-05-11 11:14:38,911 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp0.9 from training. Duration: 26.32775 2023-05-11 11:14:40,556 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0 from training. Duration: 20.025 2023-05-11 11:14:40,733 WARNING [train.py:1182] (0/2) Exclude cut with ID 6709-74022-0004-86860-0_sp1.1 from training. Duration: 0.9409375 2023-05-11 11:14:40,740 WARNING [train.py:1182] (0/2) Exclude cut with ID 4757-1811-0023-62229-0_sp0.9 from training. Duration: 21.37775 2023-05-11 11:14:41,431 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0004-25974-0_sp0.9 from training. Duration: 21.17225 2023-05-11 11:14:41,438 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp0.9 from training. Duration: 27.511125 2023-05-11 11:14:42,516 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0 from training. Duration: 22.8 2023-05-11 11:14:42,671 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0 from training. 
Duration: 22.585 2023-05-11 11:14:43,721 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0001-146967-0_sp0.9 from training. Duration: 22.0166875 2023-05-11 11:14:44,857 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp1.1 from training. Duration: 24.395375 2023-05-11 11:14:45,098 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp0.9 from training. Duration: 27.47775 2023-05-11 11:14:45,255 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp0.9 from training. Duration: 24.8833125 2023-05-11 11:14:45,359 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0 from training. Duration: 23.39 2023-05-11 11:14:45,574 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp0.9 from training. Duration: 28.72225 2023-05-11 11:14:45,904 WARNING [train.py:1182] (0/2) Exclude cut with ID 585-294811-0110-133686-0_sp0.9 from training. Duration: 20.8944375 2023-05-11 11:14:46,404 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0_sp0.9 from training. Duration: 23.8444375 2023-05-11 11:14:47,229 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0 from training. Duration: 25.85 2023-05-11 11:14:47,235 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0 from training. Duration: 21.39 2023-05-11 11:14:47,601 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0 from training. Duration: 27.92 2023-05-11 11:14:48,543 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0039-130165-0_sp0.9 from training. Duration: 20.661125 2023-05-11 11:14:49,944 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0043-15874-0_sp0.9 from training. Duration: 20.07225 2023-05-11 11:14:50,219 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0 from training. Duration: 21.01 2023-05-11 11:14:52,563 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0 from training. Duration: 20.65 2023-05-11 11:14:52,807 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0 from training. Duration: 21.46 2023-05-11 11:14:54,925 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0 from training. Duration: 0.92 2023-05-11 11:14:55,151 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0_sp0.9 from training. Duration: 23.7666875 2023-05-11 11:14:56,385 WARNING [train.py:1182] (0/2) Exclude cut with ID 8544-281189-0060-101339-0_sp0.9 from training. Duration: 20.861125 2023-05-11 11:14:56,734 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0_sp0.9 from training. Duration: 22.711125 2023-05-11 11:14:58,766 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp1.1 from training. Duration: 22.986375 2023-05-11 11:14:59,300 WARNING [train.py:1182] (0/2) Exclude cut with ID 8040-260924-0003-80960-0_sp0.9 from training. Duration: 22.07225 2023-05-11 11:14:59,465 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0045-26330-0_sp0.9 from training. Duration: 20.3055625 2023-05-11 11:14:59,550 WARNING [train.py:1182] (0/2) Exclude cut with ID 6356-271890-0060-94317-0_sp0.9 from training. Duration: 20.72225 2023-05-11 11:15:00,171 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp1.1 from training. Duration: 22.4818125 2023-05-11 11:15:01,051 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp0.9 from training. 
Duration: 25.0944375 2023-05-11 11:15:01,185 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0 from training. Duration: 21.515 2023-05-11 11:15:01,402 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp0.9 from training. Duration: 27.02225 2023-05-11 11:15:01,567 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0010-62480-0_sp0.9 from training. Duration: 22.22225 2023-05-11 11:15:01,813 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0085-44554-0_sp0.9 from training. Duration: 20.85 2023-05-11 11:15:03,503 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0 from training. Duration: 21.54 2023-05-11 11:15:03,648 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp1.1 from training. Duration: 20.5318125 2023-05-11 11:15:03,975 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0012-134311-0_sp0.9 from training. Duration: 21.9333125 2023-05-11 11:15:05,647 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0025-130151-0_sp0.9 from training. Duration: 21.7944375 2023-05-11 11:15:06,068 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0_sp0.9 from training. Duration: 22.4666875 2023-05-11 11:15:06,321 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0 from training. Duration: 21.635 2023-05-11 11:15:07,092 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0_sp0.9 from training. Duration: 24.038875 2023-05-11 11:15:08,551 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp1.1 from training. Duration: 21.786375 2023-05-11 11:15:08,954 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0 from training. Duration: 20.22 2023-05-11 11:15:13,663 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0 from training. Duration: 25.285 2023-05-11 11:15:16,502 WARNING [train.py:1182] (0/2) Exclude cut with ID 811-130148-0001-63453-0_sp0.9 from training. Duration: 20.861125 2023-05-11 11:15:17,254 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0 from training. Duration: 20.88 2023-05-11 11:15:18,339 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0_sp0.9 from training. Duration: 23.4166875 2023-05-11 11:15:21,526 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0 from training. Duration: 21.24 2023-05-11 11:15:21,536 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0_sp0.9 from training. Duration: 23.9055625 2023-05-11 11:15:22,711 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp0.9 from training. Duration: 25.988875 2023-05-11 11:15:23,012 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0001-134300-0_sp0.9 from training. Duration: 20.67225 2023-05-11 11:15:25,313 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0 from training. Duration: 20.34 2023-05-11 11:15:27,880 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp0.9 from training. Duration: 25.061125 2023-05-11 11:15:28,287 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0 from training. Duration: 0.83 2023-05-11 11:15:29,751 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0 from training. Duration: 24.73 2023-05-11 11:15:30,178 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0 from training. 
Duration: 23.965 2023-05-11 11:15:30,470 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0030-146996-0_sp0.9 from training. Duration: 22.088875 2023-05-11 11:15:31,044 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0_sp0.9 from training. Duration: 23.6 2023-05-11 11:15:35,022 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0 from training. Duration: 23.795 2023-05-11 11:15:35,584 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp1.1 from training. Duration: 21.5409375 2023-05-11 11:15:35,677 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp0.9 from training. Duration: 24.97775 2023-05-11 11:15:36,050 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0_sp0.9 from training. Duration: 23.3444375 2023-05-11 11:15:37,009 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0_sp0.9 from training. Duration: 23.2 2023-05-11 11:15:37,205 WARNING [train.py:1182] (0/2) Exclude cut with ID 5653-46179-0060-117930-0_sp0.9 from training. Duration: 21.17225 2023-05-11 11:15:38,456 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp0.9 from training. Duration: 24.6555625 2023-05-11 11:15:40,324 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0 from training. Duration: 20.44 2023-05-11 11:15:40,831 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0_sp0.9 from training. Duration: 23.45 2023-05-11 11:15:41,852 WARNING [train.py:1182] (0/2) Exclude cut with ID 6945-60535-0076-12784-0_sp0.9 from training. Duration: 20.52225 2023-05-11 11:15:42,666 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0 from training. Duration: 22.19 2023-05-11 11:15:43,011 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp1.1 from training. Duration: 25.3818125 2023-05-11 11:15:43,589 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp0.9 from training. Duration: 28.0944375 2023-05-11 11:15:43,789 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0_sp0.9 from training. Duration: 22.9444375 2023-05-11 11:15:44,082 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp1.1 from training. Duration: 21.6318125 2023-05-11 11:15:44,668 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0 from training. Duration: 23.695 2023-05-11 11:15:45,555 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0 from training. Duration: 23.955 2023-05-11 11:15:47,342 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp0.9 from training. Duration: 26.438875 2023-05-11 11:15:48,935 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0021-26306-0_sp0.9 from training. Duration: 21.2444375 2023-05-11 11:15:48,969 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp0.9 from training. Duration: 31.02225 2023-05-11 11:15:49,342 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0 from training. Duration: 22.395 2023-05-11 11:15:49,954 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0 from training. Duration: 21.075 2023-05-11 11:15:50,133 WARNING [train.py:1182] (0/2) Exclude cut with ID 6482-98857-0025-147532-0_sp0.9 from training. 
Duration: 20.0055625 2023-05-11 11:15:50,148 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0037-132304-0_sp0.9 from training. Duration: 22.05 2023-05-11 11:15:50,156 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0 from training. Duration: 26.8349375 2023-05-11 11:15:50,272 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp1.1 from training. Duration: 22.1090625 2023-05-11 11:15:50,509 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp0.9 from training. Duration: 26.6166875 2023-05-11 11:15:51,765 WARNING [train.py:1182] (0/2) Exclude cut with ID 2046-178027-0000-53705-0_sp0.9 from training. Duration: 20.3055625 2023-05-11 11:15:52,577 WARNING [train.py:1182] (0/2) Exclude cut with ID 7205-50138-0008-5373-0_sp0.9 from training. Duration: 20.7 2023-05-11 11:15:54,121 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0 from training. Duration: 22.48 2023-05-11 11:15:54,684 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp0.9 from training. Duration: 29.816625 2023-05-11 11:15:55,416 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp1.1 from training. Duration: 22.7590625 2023-05-11 11:15:55,613 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0 from training. Duration: 22.555 2023-05-11 11:15:56,955 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0005-25975-0_sp0.9 from training. Duration: 21.688875 2023-05-11 11:15:58,203 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0_sp0.9 from training. Duration: 22.6 2023-05-11 11:15:59,570 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0 from training. Duration: 24.32 2023-05-11 11:16:02,203 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-276745-0093-13116-0_sp0.9 from training. Duration: 21.061125 2023-05-11 11:16:02,693 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0024-15855-0_sp0.9 from training. Duration: 20.32225 2023-05-11 11:16:03,143 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp1.1 from training. Duration: 0.7545625 2023-05-11 11:16:03,661 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0_sp0.9 from training. Duration: 23.9333125 2023-05-11 11:16:04,786 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp1.1 from training. Duration: 20.17275 2023-05-11 11:16:05,549 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp1.1 from training. Duration: 20.436375 2023-05-11 11:16:08,406 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0_sp0.9 from training. Duration: 23.1055625 2023-05-11 11:16:08,476 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp1.1 from training. Duration: 23.5 2023-05-11 11:16:08,853 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp0.9 from training. Duration: 26.62775 2023-05-11 11:16:09,294 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0 from training. Duration: 21.105 2023-05-11 11:16:10,024 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0_sp0.9 from training. Duration: 24.411125 2023-05-11 11:16:10,985 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp1.1 from training. 
Duration: 21.263625
2023-05-11 11:16:12,114 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0 from training. Duration: 20.795
2023-05-11 11:16:12,488 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0 from training. Duration: 24.76
2023-05-11 11:16:12,503 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0_sp0.9 from training. Duration: 22.25
2023-05-11 11:16:13,405 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp1.1 from training. Duration: 20.5045625
2023-05-11 11:16:29,264 INFO [train.py:1357] (0/2) Maximum memory allocated so far is 17010MB
2023-05-11 11:16:31,775 INFO [train.py:1357] (0/2) Maximum memory allocated so far is 17953MB
2023-05-11 11:16:34,529 INFO [train.py:1357] (0/2) Maximum memory allocated so far is 17953MB
2023-05-11 11:16:37,666 INFO [train.py:1357] (0/2) Maximum memory allocated so far is 17953MB
2023-05-11 11:16:40,369 INFO [train.py:1357] (0/2) Maximum memory allocated so far is 17953MB
2023-05-11 11:16:43,012 INFO [train.py:1357] (0/2) Maximum memory allocated so far is 17953MB
2023-05-11 11:16:43,035 INFO [train.py:1238] (0/2) Loading grad scaler state dict
2023-05-11 11:16:54,892 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp1.1 from training. Duration: 22.2954375
2023-05-11 11:16:57,889 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.1500, 5.3625, 5.5502, 6.0179], device='cuda:0')
2023-05-11 11:16:59,881 INFO [train.py:1021] (0/2) Epoch 36, batch 0, loss[loss=0.1617, simple_loss=0.2509, pruned_loss=0.03623, over 36902.00 frames. ], tot_loss[loss=0.1617, simple_loss=0.2509, pruned_loss=0.03623, over 36902.00 frames. ], batch size: 100, lr: 3.06e-03, grad_scale: 32.0
2023-05-11 11:16:59,882 INFO [train.py:1048] (0/2) Computing validation loss
2023-05-11 11:17:12,681 INFO [train.py:1057] (0/2) Epoch 36, validation: loss=0.1524, simple_loss=0.2528, pruned_loss=0.02596, over 944034.00 frames.
2023-05-11 11:17:12,682 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 17953MB
2023-05-11 11:17:13,118 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.skip_rate, batch_count=636310.0, ans=0.04949747468305833
2023-05-11 11:18:02,398 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00
2023-05-11 11:18:04,946 WARNING [train.py:1182] (0/2) Exclude cut with ID 298-126791-0067-24026-0_sp0.9 from training. Duration: 21.438875
2023-05-11 11:18:10,732 WARNING [train.py:1182] (0/2) Exclude cut with ID 5652-39938-0025-23684-0_sp0.9 from training. Duration: 22.2055625
2023-05-11 11:18:19,381 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.744e+02 3.267e+02 3.677e+02 4.040e+02 5.813e+02, threshold=7.353e+02, percent-clipped=0.0
2023-05-11 11:18:26,583 INFO [train.py:1021] (0/2) Epoch 36, batch 50, loss[loss=0.1434, simple_loss=0.2267, pruned_loss=0.03007, over 36859.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2575, pruned_loss=0.03601, over 1626919.61 frames.
], batch size: 84, lr: 3.06e-03, grad_scale: 16.0 2023-05-11 11:18:48,793 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.skip_rate, batch_count=636610.0, ans=0.07 2023-05-11 11:18:58,834 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.prob, batch_count=636660.0, ans=0.125 2023-05-11 11:19:15,652 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=12.20 vs. limit=22.5 2023-05-11 11:19:31,770 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.8617, 4.1490, 4.4200, 4.4161], device='cuda:0') 2023-05-11 11:19:38,152 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=636760.0, ans=0.1 2023-05-11 11:19:38,218 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=636760.0, ans=0.2 2023-05-11 11:19:40,305 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=3.93 vs. limit=6.0 2023-05-11 11:19:42,215 INFO [train.py:1021] (0/2) Epoch 36, batch 100, loss[loss=0.1531, simple_loss=0.2432, pruned_loss=0.03151, over 36866.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2541, pruned_loss=0.03526, over 2864676.60 frames. ], batch size: 96, lr: 3.06e-03, grad_scale: 16.0 2023-05-11 11:19:52,560 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=636810.0, ans=0.1 2023-05-11 11:19:52,613 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.balancer_ff3.min_abs, batch_count=636810.0, ans=0.2 2023-05-11 11:20:05,785 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=636860.0, ans=0.125 2023-05-11 11:20:07,375 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass_mid.scale_min, batch_count=636860.0, ans=0.2 2023-05-11 11:20:11,594 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.prob, batch_count=636910.0, ans=0.125 2023-05-11 11:20:14,500 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=636910.0, ans=0.1 2023-05-11 11:20:25,319 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.2809, 5.6102, 5.4315, 5.9970], device='cuda:0') 2023-05-11 11:20:28,559 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=7.05 vs. 
limit=15.0 2023-05-11 11:20:31,739 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=636960.0, ans=0.1 2023-05-11 11:20:37,709 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=636960.0, ans=0.0 2023-05-11 11:20:40,436 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=637010.0, ans=0.125 2023-05-11 11:20:40,550 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=637010.0, ans=0.0 2023-05-11 11:20:48,945 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.234e+02 3.036e+02 3.691e+02 4.805e+02 8.319e+02, threshold=7.382e+02, percent-clipped=2.0 2023-05-11 11:20:53,855 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=637010.0, ans=0.125 2023-05-11 11:20:56,453 INFO [train.py:1021] (0/2) Epoch 36, batch 150, loss[loss=0.1524, simple_loss=0.2456, pruned_loss=0.02957, over 37051.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2535, pruned_loss=0.03476, over 3819925.85 frames. ], batch size: 99, lr: 3.06e-03, grad_scale: 16.0 2023-05-11 11:21:15,454 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.min_positive, batch_count=637110.0, ans=0.025 2023-05-11 11:21:15,554 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=637110.0, ans=0.0 2023-05-11 11:21:17,312 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0 from training. Duration: 24.525 2023-05-11 11:21:52,989 WARNING [train.py:1182] (0/2) Exclude cut with ID 3699-47246-0007-3408-0_sp0.9 from training. Duration: 20.26675 2023-05-11 11:21:57,556 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=637260.0, ans=0.125 2023-05-11 11:22:05,798 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp0.9 from training. Duration: 27.25 2023-05-11 11:22:10,544 INFO [train.py:1021] (0/2) Epoch 36, batch 200, loss[loss=0.1645, simple_loss=0.2625, pruned_loss=0.03325, over 36935.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2543, pruned_loss=0.0346, over 4585372.71 frames. ], batch size: 108, lr: 3.06e-03, grad_scale: 16.0 2023-05-11 11:22:10,811 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer_na.min_abs, batch_count=637310.0, ans=0.02 2023-05-11 11:22:38,935 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer1.prob, batch_count=637410.0, ans=0.125 2023-05-11 11:23:17,260 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.397e+02 2.936e+02 3.259e+02 3.848e+02 7.432e+02, threshold=6.517e+02, percent-clipped=1.0 2023-05-11 11:23:17,599 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=637510.0, ans=0.1 2023-05-11 11:23:21,804 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0 from training. Duration: 21.68 2023-05-11 11:23:24,608 INFO [train.py:1021] (0/2) Epoch 36, batch 250, loss[loss=0.1546, simple_loss=0.2552, pruned_loss=0.02707, over 37178.00 frames. 
], tot_loss[loss=0.1614, simple_loss=0.254, pruned_loss=0.03437, over 5164577.55 frames. ], batch size: 102, lr: 3.06e-03, grad_scale: 16.0 2023-05-11 11:23:24,888 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=637560.0, ans=0.125 2023-05-11 11:23:28,123 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.68 vs. limit=6.0 2023-05-11 11:23:28,259 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=8.19 vs. limit=15.0 2023-05-11 11:23:34,686 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0 from training. Duration: 21.6300625 2023-05-11 11:23:39,189 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=637610.0, ans=0.0 2023-05-11 11:23:40,504 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=637610.0, ans=0.0 2023-05-11 11:23:40,531 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.skip_rate, batch_count=637610.0, ans=0.07 2023-05-11 11:23:49,484 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3715, 3.6316, 4.0197, 3.6543], device='cuda:0') 2023-05-11 11:23:58,301 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0_sp0.9 from training. Duration: 24.033375 2023-05-11 11:24:21,041 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=6.92 vs. limit=15.0 2023-05-11 11:24:37,753 INFO [train.py:1021] (0/2) Epoch 36, batch 300, loss[loss=0.164, simple_loss=0.2629, pruned_loss=0.0326, over 36827.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2531, pruned_loss=0.03414, over 5618633.23 frames. ], batch size: 111, lr: 3.06e-03, grad_scale: 16.0 2023-05-11 11:24:38,485 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.40 vs. limit=22.5 2023-05-11 11:24:55,974 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0 from training. Duration: 22.905 2023-05-11 11:24:57,402 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp1.1 from training. Duration: 23.4318125 2023-05-11 11:25:09,820 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer2.prob, batch_count=637910.0, ans=0.125 2023-05-11 11:25:24,087 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.3653, 4.6904, 4.8639, 4.5528], device='cuda:0') 2023-05-11 11:25:44,712 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.519e+02 3.073e+02 3.644e+02 4.385e+02 7.936e+02, threshold=7.287e+02, percent-clipped=5.0 2023-05-11 11:25:46,409 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=638010.0, ans=0.0 2023-05-11 11:25:52,003 INFO [train.py:1021] (0/2) Epoch 36, batch 350, loss[loss=0.1356, simple_loss=0.2241, pruned_loss=0.02358, over 37070.00 frames. ], tot_loss[loss=0.1603, simple_loss=0.2523, pruned_loss=0.03418, over 5959926.24 frames. 
], batch size: 88, lr: 3.06e-03, grad_scale: 16.0 2023-05-11 11:26:55,394 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp1.1 from training. Duration: 20.82275 2023-05-11 11:26:56,790 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp0.9 from training. Duration: 25.45 2023-05-11 11:27:03,319 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=13.91 vs. limit=15.0 2023-05-11 11:27:05,510 INFO [train.py:1021] (0/2) Epoch 36, batch 400, loss[loss=0.1686, simple_loss=0.2677, pruned_loss=0.03471, over 37018.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.2533, pruned_loss=0.03445, over 6233253.86 frames. ], batch size: 104, lr: 3.06e-03, grad_scale: 32.0 2023-05-11 11:27:49,290 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=638460.0, ans=0.0 2023-05-11 11:27:55,976 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0 from training. Duration: 25.775 2023-05-11 11:27:57,739 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=638460.0, ans=0.1 2023-05-11 11:28:12,284 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.474e+02 2.982e+02 3.365e+02 4.025e+02 7.025e+02, threshold=6.731e+02, percent-clipped=0.0 2023-05-11 11:28:16,660 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0_sp0.9 from training. Duration: 22.25 2023-05-11 11:28:19,450 INFO [train.py:1021] (0/2) Epoch 36, batch 450, loss[loss=0.1984, simple_loss=0.2848, pruned_loss=0.05604, over 25094.00 frames. ], tot_loss[loss=0.1621, simple_loss=0.2545, pruned_loss=0.03486, over 6428977.16 frames. ], batch size: 233, lr: 3.06e-03, grad_scale: 32.0 2023-05-11 11:28:27,054 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.9990, 2.6663, 2.4761, 2.8481, 1.7604, 2.7694, 2.9370, 2.7912], device='cuda:0') 2023-05-11 11:28:43,115 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0 from training. Duration: 26.205 2023-05-11 11:28:44,801 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.0404, 4.1327, 3.7896, 4.1407, 3.4901, 3.1338, 3.5231, 3.0222], device='cuda:0') 2023-05-11 11:28:58,843 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp0.9 from training. Duration: 30.1555625 2023-05-11 11:29:05,199 WARNING [train.py:1182] (0/2) Exclude cut with ID 1265-135635-0050-6781-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 11:29:15,159 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp1.1 from training. Duration: 20.6545625 2023-05-11 11:29:15,466 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.3722, 3.0867, 4.5429, 3.5113], device='cuda:0') 2023-05-11 11:29:22,635 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.3504, 5.1767, 4.4464, 4.9196], device='cuda:0') 2023-05-11 11:29:23,430 INFO [scaling.py:969] (0/2) Whitening: name=encoder_embed.out_whiten, num_groups=1, num_channels=192, metric=7.41 vs. 
limit=8.0 2023-05-11 11:29:24,013 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=638760.0, ans=0.125 2023-05-11 11:29:27,344 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=638760.0, ans=0.1 2023-05-11 11:29:31,823 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=638810.0, ans=0.0 2023-05-11 11:29:32,992 INFO [train.py:1021] (0/2) Epoch 36, batch 500, loss[loss=0.1598, simple_loss=0.2516, pruned_loss=0.03395, over 36854.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.2548, pruned_loss=0.03482, over 6617600.90 frames. ], batch size: 96, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:29:34,817 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=638810.0, ans=0.2 2023-05-11 11:29:56,797 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0045-39920-0_sp0.9 from training. Duration: 20.52225 2023-05-11 11:30:17,716 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp0.9 from training. Duration: 29.1166875 2023-05-11 11:30:39,270 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.602e+02 3.103e+02 3.554e+02 4.156e+02 6.805e+02, threshold=7.107e+02, percent-clipped=1.0 2023-05-11 11:30:46,977 INFO [train.py:1021] (0/2) Epoch 36, batch 550, loss[loss=0.183, simple_loss=0.2823, pruned_loss=0.0419, over 34718.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2563, pruned_loss=0.03521, over 6754295.27 frames. ], batch size: 145, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:31:18,831 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133211-0007-59831-0_sp0.9 from training. Duration: 21.388875 2023-05-11 11:31:22,000 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.const_attention_rate, batch_count=639160.0, ans=0.025 2023-05-11 11:31:38,998 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=4.78 vs. limit=15.0 2023-05-11 11:31:51,707 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0 from training. Duration: 22.72 2023-05-11 11:31:53,137 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0_sp0.9 from training. Duration: 22.7444375 2023-05-11 11:31:54,803 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=639260.0, ans=0.0 2023-05-11 11:31:54,823 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.prob, batch_count=639260.0, ans=0.125 2023-05-11 11:31:58,980 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=639310.0, ans=0.0 2023-05-11 11:32:00,216 INFO [train.py:1021] (0/2) Epoch 36, batch 600, loss[loss=0.1777, simple_loss=0.2747, pruned_loss=0.04035, over 36340.00 frames. ], tot_loss[loss=0.1639, simple_loss=0.257, pruned_loss=0.03541, over 6828465.02 frames. 
], batch size: 126, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:32:10,987 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=639310.0, ans=0.125 2023-05-11 11:32:16,745 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer1.prob, batch_count=639360.0, ans=0.125 2023-05-11 11:32:16,745 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=639360.0, ans=0.2 2023-05-11 11:32:24,058 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.max_abs, batch_count=639360.0, ans=10.0 2023-05-11 11:32:37,562 WARNING [train.py:1182] (0/2) Exclude cut with ID 4133-6541-0027-40495-0_sp1.1 from training. Duration: 0.9681875 2023-05-11 11:32:41,829 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0_sp0.9 from training. Duration: 22.3166875 2023-05-11 11:32:47,394 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133212-0015-59917-0_sp0.9 from training. Duration: 21.8166875 2023-05-11 11:33:08,097 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.430e+02 3.080e+02 3.608e+02 4.265e+02 7.409e+02, threshold=7.216e+02, percent-clipped=1.0 2023-05-11 11:33:08,486 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass.skip_rate, batch_count=639510.0, ans=0.09899494936611666 2023-05-11 11:33:13,977 INFO [train.py:1021] (0/2) Epoch 36, batch 650, loss[loss=0.1858, simple_loss=0.2811, pruned_loss=0.04527, over 35910.00 frames. ], tot_loss[loss=0.1642, simple_loss=0.2574, pruned_loss=0.03549, over 6926414.91 frames. ], batch size: 133, lr: 3.05e-03, grad_scale: 16.0 2023-05-11 11:33:17,122 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.prob, batch_count=639560.0, ans=0.125 2023-05-11 11:33:28,035 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer2.prob, batch_count=639610.0, ans=0.125 2023-05-11 11:33:39,517 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=639610.0, ans=0.125 2023-05-11 11:34:19,805 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.41 vs. limit=15.0 2023-05-11 11:34:27,887 INFO [train.py:1021] (0/2) Epoch 36, batch 700, loss[loss=0.1851, simple_loss=0.2807, pruned_loss=0.04473, over 36344.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2572, pruned_loss=0.03566, over 6972116.85 frames. ], batch size: 126, lr: 3.05e-03, grad_scale: 16.0 2023-05-11 11:34:32,216 WARNING [train.py:1182] (0/2) Exclude cut with ID 4957-30119-0041-23990-0_sp0.9 from training. Duration: 20.22775 2023-05-11 11:34:32,418 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=639810.0, ans=0.07 2023-05-11 11:34:35,901 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.98 vs. limit=15.0 2023-05-11 11:34:52,307 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.whiten, num_groups=1, num_channels=192, metric=3.88 vs. 
limit=12.0 2023-05-11 11:34:53,146 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=639860.0, ans=0.1 2023-05-11 11:34:54,470 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=639860.0, ans=0.1 2023-05-11 11:35:03,304 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=639910.0, ans=0.125 2023-05-11 11:35:16,743 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp1.1 from training. Duration: 24.67275 2023-05-11 11:35:22,605 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/checkpoint-128000.pt 2023-05-11 11:35:37,187 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.473e+02 3.120e+02 3.379e+02 3.910e+02 7.699e+02, threshold=6.757e+02, percent-clipped=2.0 2023-05-11 11:35:37,503 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=640010.0, ans=0.0 2023-05-11 11:35:43,216 INFO [train.py:1021] (0/2) Epoch 36, batch 750, loss[loss=0.1637, simple_loss=0.2589, pruned_loss=0.03421, over 37184.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.2571, pruned_loss=0.03556, over 7027024.35 frames. ], batch size: 102, lr: 3.05e-03, grad_scale: 16.0 2023-05-11 11:35:47,650 WARNING [train.py:1182] (0/2) Exclude cut with ID 3082-165428-0081-50734-0_sp0.9 from training. Duration: 21.8055625 2023-05-11 11:35:52,479 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=3.55 vs. limit=12.0 2023-05-11 11:36:13,479 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=4.00 vs. limit=12.0 2023-05-11 11:36:26,976 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0_sp0.9 from training. Duration: 22.6666875 2023-05-11 11:36:56,526 INFO [train.py:1021] (0/2) Epoch 36, batch 800, loss[loss=0.1662, simple_loss=0.2636, pruned_loss=0.03442, over 37175.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2571, pruned_loss=0.0353, over 7080073.39 frames. ], batch size: 102, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:37:04,801 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=640310.0, ans=0.125 2023-05-11 11:37:24,415 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.const_attention_rate, batch_count=640410.0, ans=0.025 2023-05-11 11:37:28,306 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.26 vs. limit=15.0 2023-05-11 11:37:29,213 WARNING [train.py:1182] (0/2) Exclude cut with ID 2411-132532-0017-82279-0_sp1.1 from training. Duration: 0.9681875 2023-05-11 11:37:36,563 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff3_skip_rate, batch_count=640410.0, ans=0.0 2023-05-11 11:37:56,968 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0 from training. 
Duration: 22.485 2023-05-11 11:38:00,018 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass_mid.scale_min, batch_count=640510.0, ans=0.2 2023-05-11 11:38:03,984 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.632e+02 3.058e+02 3.476e+02 4.105e+02 6.399e+02, threshold=6.952e+02, percent-clipped=0.0 2023-05-11 11:38:09,913 INFO [train.py:1021] (0/2) Epoch 36, batch 850, loss[loss=0.1494, simple_loss=0.2415, pruned_loss=0.02871, over 37030.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2567, pruned_loss=0.03496, over 7124489.15 frames. ], batch size: 99, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:38:17,697 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.scale_min, batch_count=640560.0, ans=0.2 2023-05-11 11:38:26,735 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.attention_skip_rate, batch_count=640610.0, ans=0.0 2023-05-11 11:38:35,304 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp1.1 from training. Duration: 23.82275 2023-05-11 11:38:51,822 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0 from training. Duration: 20.77 2023-05-11 11:38:59,167 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0_sp0.9 from training. Duration: 24.088875 2023-05-11 11:39:05,019 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=640710.0, ans=0.0 2023-05-11 11:39:23,954 INFO [train.py:1021] (0/2) Epoch 36, batch 900, loss[loss=0.1612, simple_loss=0.2525, pruned_loss=0.035, over 37072.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2566, pruned_loss=0.03489, over 7152603.64 frames. ], batch size: 94, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:39:24,355 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.1719, 4.5708, 3.1739, 3.1743], device='cuda:0') 2023-05-11 11:39:29,771 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp1.1 from training. Duration: 20.4409375 2023-05-11 11:39:31,625 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=7.93 vs. limit=22.5 2023-05-11 11:40:14,490 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=10.51 vs. limit=15.0 2023-05-11 11:40:30,788 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=641010.0, ans=0.0 2023-05-11 11:40:31,922 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.492e+02 2.978e+02 3.473e+02 4.210e+02 8.136e+02, threshold=6.945e+02, percent-clipped=1.0 2023-05-11 11:40:37,657 INFO [train.py:1021] (0/2) Epoch 36, batch 950, loss[loss=0.162, simple_loss=0.2591, pruned_loss=0.03246, over 37170.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2569, pruned_loss=0.03502, over 7150825.06 frames. ], batch size: 102, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:40:39,436 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=641060.0, ans=0.1 2023-05-11 11:40:44,961 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0_sp0.9 from training. 
Duration: 22.511125 2023-05-11 11:40:46,290 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0 from training. Duration: 20.675 2023-05-11 11:40:51,222 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.prob, batch_count=641110.0, ans=0.125 2023-05-11 11:41:00,307 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.const_attention_rate, batch_count=641110.0, ans=0.025 2023-05-11 11:41:23,629 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer1.max_abs, batch_count=641210.0, ans=10.0 2023-05-11 11:41:36,776 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=641260.0, ans=0.2 2023-05-11 11:41:46,677 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer2.min_positive, batch_count=641260.0, ans=0.05 2023-05-11 11:41:50,674 INFO [train.py:1021] (0/2) Epoch 36, batch 1000, loss[loss=0.1672, simple_loss=0.2642, pruned_loss=0.03515, over 37072.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2571, pruned_loss=0.03521, over 7161718.76 frames. ], batch size: 103, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:42:24,369 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=641410.0, ans=0.1 2023-05-11 11:42:25,666 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp0.9 from training. Duration: 24.9833125 2023-05-11 11:42:32,302 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.85 vs. limit=6.0 2023-05-11 11:42:38,030 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.94 vs. limit=15.0 2023-05-11 11:42:56,935 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0 from training. Duration: 27.14 2023-05-11 11:42:59,626 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.563e+02 3.121e+02 3.610e+02 4.306e+02 7.195e+02, threshold=7.221e+02, percent-clipped=2.0 2023-05-11 11:43:05,883 INFO [train.py:1021] (0/2) Epoch 36, batch 1050, loss[loss=0.1799, simple_loss=0.2725, pruned_loss=0.04366, over 37036.00 frames. ], tot_loss[loss=0.164, simple_loss=0.2575, pruned_loss=0.03527, over 7166997.43 frames. ], batch size: 116, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:43:14,598 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0 from training. Duration: 22.44 2023-05-11 11:43:34,168 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 11:43:54,342 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=13.51 vs. limit=22.5 2023-05-11 11:43:55,226 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=641710.0, ans=0.125 2023-05-11 11:44:01,826 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.40 vs. 
limit=15.0 2023-05-11 11:44:11,757 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.3628, 3.7420, 3.9722, 3.7791], device='cuda:0') 2023-05-11 11:44:18,902 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=641810.0, ans=0.125 2023-05-11 11:44:19,937 INFO [train.py:1021] (0/2) Epoch 36, batch 1100, loss[loss=0.1658, simple_loss=0.2598, pruned_loss=0.0359, over 37006.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2567, pruned_loss=0.03502, over 7168997.05 frames. ], batch size: 104, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:44:20,300 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=641810.0, ans=0.125 2023-05-11 11:44:30,704 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=641810.0, ans=0.2 2023-05-11 11:44:31,244 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=16.13 vs. limit=22.5 2023-05-11 11:44:33,464 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0060-62364-0_sp0.9 from training. Duration: 21.361125 2023-05-11 11:44:41,325 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp1.1 from training. Duration: 27.0318125 2023-05-11 11:44:41,537 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=641860.0, ans=0.0 2023-05-11 11:44:51,681 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp0.9 from training. Duration: 28.638875 2023-05-11 11:44:51,962 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=641910.0, ans=0.0 2023-05-11 11:45:00,101 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.79 vs. limit=6.0 2023-05-11 11:45:08,224 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0 from training. Duration: 20.4 2023-05-11 11:45:09,967 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=641960.0, ans=0.125 2023-05-11 11:45:27,571 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=642010.0, ans=0.0 2023-05-11 11:45:28,655 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.603e+02 3.184e+02 3.758e+02 4.751e+02 8.609e+02, threshold=7.517e+02, percent-clipped=2.0 2023-05-11 11:45:29,726 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.4441, 4.1186, 3.7702, 4.1067, 3.4547, 3.1620, 3.5508, 3.0937], device='cuda:0') 2023-05-11 11:45:34,781 INFO [train.py:1021] (0/2) Epoch 36, batch 1150, loss[loss=0.1944, simple_loss=0.2838, pruned_loss=0.05255, over 25717.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2558, pruned_loss=0.03476, over 7188143.20 frames. ], batch size: 235, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:45:39,077 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0 from training. Duration: 20.025 2023-05-11 11:45:40,483 WARNING [train.py:1182] (0/2) Exclude cut with ID 2364-131735-0112-64612-0_sp0.9 from training. 
Duration: 20.488875 2023-05-11 11:45:43,682 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass_mid.scale_min, batch_count=642060.0, ans=0.2 2023-05-11 11:45:46,974 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0 from training. Duration: 29.735 2023-05-11 11:45:47,251 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer2.prob, batch_count=642060.0, ans=0.125 2023-05-11 11:46:49,706 INFO [train.py:1021] (0/2) Epoch 36, batch 1200, loss[loss=0.1657, simple_loss=0.2626, pruned_loss=0.03446, over 37067.00 frames. ], tot_loss[loss=0.1635, simple_loss=0.2567, pruned_loss=0.03512, over 7178553.12 frames. ], batch size: 116, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:46:57,263 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.2417, 5.0835, 4.3604, 4.8206], device='cuda:0') 2023-05-11 11:47:05,787 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.attention_skip_rate, batch_count=642360.0, ans=0.0 2023-05-11 11:47:09,873 WARNING [train.py:1182] (0/2) Exclude cut with ID 7276-92427-0014-12983-0_sp0.9 from training. Duration: 21.3055625 2023-05-11 11:47:11,247 WARNING [train.py:1182] (0/2) Exclude cut with ID 1025-75365-0008-79168-0_sp0.9 from training. Duration: 22.0666875 2023-05-11 11:47:14,275 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.hidden_balancer.prob, batch_count=642360.0, ans=0.125 2023-05-11 11:47:57,173 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.491e+02 3.085e+02 3.686e+02 4.442e+02 6.743e+02, threshold=7.373e+02, percent-clipped=0.0 2023-05-11 11:48:02,925 INFO [train.py:1021] (0/2) Epoch 36, batch 1250, loss[loss=0.1712, simple_loss=0.2716, pruned_loss=0.03537, over 36706.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2561, pruned_loss=0.03471, over 7218357.31 frames. ], batch size: 122, lr: 3.05e-03, grad_scale: 32.0 2023-05-11 11:48:24,992 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3670, 4.2569, 2.1824, 2.4712], device='cuda:0') 2023-05-11 11:48:42,638 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=642660.0, ans=0.95 2023-05-11 11:48:46,782 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=642710.0, ans=0.1 2023-05-11 11:48:59,345 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0 from training. Duration: 20.26 2023-05-11 11:49:12,351 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=4.11 vs. limit=15.0 2023-05-11 11:49:15,002 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0030-9324-0_sp0.9 from training. Duration: 21.3444375 2023-05-11 11:49:17,783 INFO [train.py:1021] (0/2) Epoch 36, batch 1300, loss[loss=0.1762, simple_loss=0.273, pruned_loss=0.03975, over 36882.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2566, pruned_loss=0.03497, over 7188004.33 frames. 
], batch size: 105, lr: 3.04e-03, grad_scale: 16.0 2023-05-11 11:49:18,087 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.prob, batch_count=642810.0, ans=0.125 2023-05-11 11:49:31,103 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.nonlin_attention.balancer.prob, batch_count=642860.0, ans=0.125 2023-05-11 11:49:51,630 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.prob, batch_count=642910.0, ans=0.125 2023-05-11 11:50:14,540 WARNING [train.py:1182] (0/2) Exclude cut with ID 497-129325-0061-62254-0_sp1.1 from training. Duration: 0.97725 2023-05-11 11:50:19,214 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.skip_rate, batch_count=643010.0, ans=0.035 2023-05-11 11:50:27,678 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.326e+02 3.167e+02 3.630e+02 4.374e+02 6.422e+02, threshold=7.260e+02, percent-clipped=0.0 2023-05-11 11:50:32,064 INFO [train.py:1021] (0/2) Epoch 36, batch 1350, loss[loss=0.1605, simple_loss=0.2615, pruned_loss=0.02975, over 37088.00 frames. ], tot_loss[loss=0.1631, simple_loss=0.2562, pruned_loss=0.03494, over 7202674.49 frames. ], batch size: 110, lr: 3.04e-03, grad_scale: 16.0 2023-05-11 11:50:56,164 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0_sp0.9 from training. Duration: 22.97225 2023-05-11 11:50:57,983 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.4505, 3.6304, 4.0378, 3.7593], device='cuda:0') 2023-05-11 11:51:02,611 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=643160.0, ans=0.1 2023-05-11 11:51:25,622 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.0.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 11:51:28,266 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0047-39922-0_sp0.9 from training. Duration: 21.97775 2023-05-11 11:51:43,219 WARNING [train.py:1182] (0/2) Exclude cut with ID 1112-1043-0006-89194-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 11:51:46,158 INFO [train.py:1021] (0/2) Epoch 36, batch 1400, loss[loss=0.1851, simple_loss=0.2806, pruned_loss=0.04477, over 32368.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2568, pruned_loss=0.03503, over 7173697.31 frames. ], batch size: 170, lr: 3.04e-03, grad_scale: 16.0 2023-05-11 11:51:53,423 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0 from training. 
Duration: 20.47 2023-05-11 11:52:01,654 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=643360.0, ans=0.125 2023-05-11 11:52:41,249 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=643460.0, ans=0.0 2023-05-11 11:52:44,191 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=643510.0, ans=0.1 2023-05-11 11:52:48,425 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.dropout.p, batch_count=643510.0, ans=0.1 2023-05-11 11:52:56,309 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.508e+02 3.028e+02 3.390e+02 3.842e+02 5.713e+02, threshold=6.780e+02, percent-clipped=0.0 2023-05-11 11:53:00,580 INFO [train.py:1021] (0/2) Epoch 36, batch 1450, loss[loss=0.1512, simple_loss=0.2443, pruned_loss=0.02907, over 36844.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2557, pruned_loss=0.03465, over 7193116.65 frames. ], batch size: 96, lr: 3.04e-03, grad_scale: 16.0 2023-05-11 11:53:00,862 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=643560.0, ans=0.0 2023-05-11 11:53:02,118 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0037-39912-0_sp0.9 from training. Duration: 20.67225 2023-05-11 11:53:03,744 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.min_positive, batch_count=643560.0, ans=0.05 2023-05-11 11:53:03,846 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.prob, batch_count=643560.0, ans=0.125 2023-05-11 11:53:12,473 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=643560.0, ans=0.1 2023-05-11 11:53:23,902 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp0.9 from training. Duration: 25.2444375 2023-05-11 11:53:48,189 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0021-76797-0_sp0.9 from training. Duration: 21.1445 2023-05-11 11:53:54,287 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=643710.0, ans=0.125 2023-05-11 11:54:11,653 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=643760.0, ans=0.0 2023-05-11 11:54:14,256 INFO [train.py:1021] (0/2) Epoch 36, batch 1500, loss[loss=0.1655, simple_loss=0.262, pruned_loss=0.03455, over 36808.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.256, pruned_loss=0.03464, over 7192300.28 frames. ], batch size: 113, lr: 3.04e-03, grad_scale: 16.0 2023-05-11 11:54:15,925 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=643810.0, ans=0.0 2023-05-11 11:54:43,338 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=643910.0, ans=0.1 2023-05-11 11:55:01,975 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp0.9 from training. 
Duration: 33.038875 2023-05-11 11:55:24,123 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.414e+02 3.005e+02 3.529e+02 4.227e+02 7.455e+02, threshold=7.057e+02, percent-clipped=2.0 2023-05-11 11:55:27,356 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_skip_rate, batch_count=644060.0, ans=0.0 2023-05-11 11:55:28,461 INFO [train.py:1021] (0/2) Epoch 36, batch 1550, loss[loss=0.1708, simple_loss=0.2703, pruned_loss=0.03565, over 36922.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.256, pruned_loss=0.03466, over 7205038.28 frames. ], batch size: 108, lr: 3.04e-03, grad_scale: 16.0 2023-05-11 11:55:41,914 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64291-0000-16059-0_sp0.9 from training. Duration: 20.0944375 2023-05-11 11:55:42,111 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.const_attention_rate, batch_count=644110.0, ans=0.025 2023-05-11 11:55:57,770 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp1.1 from training. Duration: 20.4 2023-05-11 11:56:06,467 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0 from training. Duration: 20.085 2023-05-11 11:56:14,492 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=644210.0, ans=0.125 2023-05-11 11:56:14,623 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=644210.0, ans=0.125 2023-05-11 11:56:14,884 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=3.64 vs. limit=15.0 2023-05-11 11:56:17,200 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0_sp0.9 from training. Duration: 23.07775 2023-05-11 11:56:30,918 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=644260.0, ans=0.1 2023-05-11 11:56:41,820 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn2.whiten, num_groups=1, num_channels=192, metric=10.69 vs. limit=22.5 2023-05-11 11:56:42,162 INFO [train.py:1021] (0/2) Epoch 36, batch 1600, loss[loss=0.1431, simple_loss=0.2363, pruned_loss=0.02501, over 36912.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2546, pruned_loss=0.03407, over 7243480.59 frames. ], batch size: 100, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 11:56:52,838 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass_mid.scale_min, batch_count=644310.0, ans=0.2 2023-05-11 11:56:56,017 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=644360.0, ans=10.0 2023-05-11 11:57:01,268 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp0.9 from training. Duration: 24.9333125 2023-05-11 11:57:03,350 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=9.05 vs. 
limit=15.0 2023-05-11 11:57:10,754 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=644410.0, ans=0.1 2023-05-11 11:57:16,985 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=644410.0, ans=0.125 2023-05-11 11:57:36,057 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.5766, 2.9682, 2.8102, 3.4176, 1.9385, 3.1152, 3.5313, 3.1250], device='cuda:0') 2023-05-11 11:57:41,877 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=644510.0, ans=0.0 2023-05-11 11:57:45,978 WARNING [train.py:1182] (0/2) Exclude cut with ID 5118-111612-0016-124680-0_sp0.9 from training. Duration: 20.388875 2023-05-11 11:57:51,115 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten1.whitening_limit, batch_count=644510.0, ans=10.0 2023-05-11 11:57:51,842 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.582e+02 3.150e+02 3.755e+02 4.699e+02 8.913e+02, threshold=7.510e+02, percent-clipped=4.0 2023-05-11 11:57:51,950 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp1.1 from training. Duration: 20.3590625 2023-05-11 11:57:55,130 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=644560.0, ans=0.1 2023-05-11 11:57:56,238 INFO [train.py:1021] (0/2) Epoch 36, batch 1650, loss[loss=0.1744, simple_loss=0.2749, pruned_loss=0.03699, over 34782.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2559, pruned_loss=0.03453, over 7195583.50 frames. ], batch size: 145, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 11:58:04,455 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 11:58:09,532 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.00 vs. limit=10.0 2023-05-11 11:58:22,593 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=4.89 vs. limit=15.0 2023-05-11 11:58:33,800 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=644660.0, ans=0.125 2023-05-11 11:58:46,673 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=644710.0, ans=0.1 2023-05-11 11:59:00,635 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0_sp1.1 from training. Duration: 0.836375 2023-05-11 11:59:10,803 INFO [train.py:1021] (0/2) Epoch 36, batch 1700, loss[loss=0.1808, simple_loss=0.2778, pruned_loss=0.04188, over 36812.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2565, pruned_loss=0.035, over 7215684.03 frames. ], batch size: 113, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 11:59:27,108 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=644860.0, ans=0.0 2023-05-11 11:59:44,818 WARNING [train.py:1182] (0/2) Exclude cut with ID 8565-290391-0049-67394-0_sp0.9 from training. 
Duration: 21.3166875 2023-05-11 11:59:45,113 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.8951, 4.0142, 4.4108, 4.4895], device='cuda:0') 2023-05-11 12:00:06,074 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.out_combiner.scale_min, batch_count=644960.0, ans=0.2 2023-05-11 12:00:16,102 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0029-104863-0_sp0.9 from training. Duration: 22.1055625 2023-05-11 12:00:20,103 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.462e+02 3.085e+02 3.311e+02 3.796e+02 6.976e+02, threshold=6.622e+02, percent-clipped=0.0 2023-05-11 12:00:24,397 INFO [train.py:1021] (0/2) Epoch 36, batch 1750, loss[loss=0.1793, simple_loss=0.2726, pruned_loss=0.04297, over 24630.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.2566, pruned_loss=0.03574, over 7207554.03 frames. ], batch size: 234, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 12:00:27,415 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp1.1 from training. Duration: 21.77725 2023-05-11 12:00:27,731 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.skip_rate, batch_count=645060.0, ans=0.04949747468305833 2023-05-11 12:00:30,578 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.prob, batch_count=645060.0, ans=0.125 2023-05-11 12:00:30,652 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.prob, batch_count=645060.0, ans=0.125 2023-05-11 12:00:47,444 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp0.9 from training. Duration: 27.8166875 2023-05-11 12:01:12,072 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp1.1 from training. Duration: 22.5090625 2023-05-11 12:01:19,420 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0 from training. Duration: 25.035 2023-05-11 12:01:24,279 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=645260.0, ans=0.0 2023-05-11 12:01:38,280 WARNING [train.py:1182] (0/2) Exclude cut with ID 774-127930-0014-10412-0_sp1.1 from training. Duration: 0.95 2023-05-11 12:01:39,695 INFO [train.py:1021] (0/2) Epoch 36, batch 1800, loss[loss=0.1615, simple_loss=0.2476, pruned_loss=0.03773, over 37044.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2569, pruned_loss=0.03664, over 7184321.57 frames. ], batch size: 99, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 12:01:55,435 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp0.9 from training. Duration: 0.92225 2023-05-11 12:02:06,027 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.23 vs. limit=10.0 2023-05-11 12:02:20,524 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0 from training. Duration: 21.97 2023-05-11 12:02:26,801 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=645460.0, ans=0.0 2023-05-11 12:02:38,218 WARNING [train.py:1182] (0/2) Exclude cut with ID 7492-105653-0055-62765-0_sp0.9 from training. Duration: 21.97225 2023-05-11 12:02:39,613 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp0.9 from training. 
Duration: 25.3333125 2023-05-11 12:02:48,285 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.815e+02 3.468e+02 3.842e+02 4.577e+02 7.505e+02, threshold=7.684e+02, percent-clipped=2.0 2023-05-11 12:02:51,328 WARNING [train.py:1182] (0/2) Exclude cut with ID 5172-29468-0015-19128-0_sp0.9 from training. Duration: 21.5055625 2023-05-11 12:02:52,793 INFO [train.py:1021] (0/2) Epoch 36, batch 1850, loss[loss=0.1689, simple_loss=0.2608, pruned_loss=0.03853, over 37028.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2577, pruned_loss=0.0375, over 7180310.38 frames. ], batch size: 104, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 12:03:01,620 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp1.1 from training. Duration: 20.72725 2023-05-11 12:03:36,303 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp0.9 from training. Duration: 26.32775 2023-05-11 12:03:45,324 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=645710.0, ans=0.1 2023-05-11 12:03:46,743 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.3626, 5.6832, 5.4909, 6.1105], device='cuda:0') 2023-05-11 12:03:55,468 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.skip_rate, batch_count=645760.0, ans=0.04949747468305833 2023-05-11 12:04:07,490 INFO [train.py:1021] (0/2) Epoch 36, batch 1900, loss[loss=0.1659, simple_loss=0.2612, pruned_loss=0.0353, over 37094.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2578, pruned_loss=0.0384, over 7179597.87 frames. ], batch size: 107, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 12:04:07,602 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0 from training. Duration: 20.025 2023-05-11 12:04:13,885 WARNING [train.py:1182] (0/2) Exclude cut with ID 6709-74022-0004-86860-0_sp1.1 from training. Duration: 0.9409375 2023-05-11 12:04:14,512 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=11.46 vs. limit=15.0 2023-05-11 12:04:15,354 WARNING [train.py:1182] (0/2) Exclude cut with ID 4757-1811-0023-62229-0_sp0.9 from training. Duration: 21.37775 2023-05-11 12:04:25,939 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=645860.0, ans=0.1 2023-05-11 12:04:35,806 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0004-25974-0_sp0.9 from training. Duration: 21.17225 2023-05-11 12:04:35,828 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp0.9 from training. Duration: 27.511125 2023-05-11 12:04:44,027 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=5.52 vs. limit=15.0 2023-05-11 12:04:48,153 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=30.33 vs. limit=15.0 2023-05-11 12:04:53,513 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([1.9221, 4.0675, 3.7396, 4.0464, 3.4545, 3.0928, 3.4815, 3.0592], device='cuda:0') 2023-05-11 12:05:10,174 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0 from training. 
Duration: 22.8 2023-05-11 12:05:14,613 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0 from training. Duration: 22.585 2023-05-11 12:05:17,379 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.958e+02 3.618e+02 4.227e+02 5.204e+02 8.120e+02, threshold=8.454e+02, percent-clipped=2.0 2023-05-11 12:05:21,742 INFO [train.py:1021] (0/2) Epoch 36, batch 1950, loss[loss=0.1704, simple_loss=0.2607, pruned_loss=0.04007, over 37130.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2576, pruned_loss=0.0389, over 7197123.10 frames. ], batch size: 107, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 12:05:32,479 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten2.whitening_limit, batch_count=646060.0, ans=15.0 2023-05-11 12:05:43,213 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0001-146967-0_sp0.9 from training. Duration: 22.0166875 2023-05-11 12:06:01,654 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp1.1 from training. Duration: 24.395375 2023-05-11 12:06:08,876 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp0.9 from training. Duration: 27.47775 2023-05-11 12:06:13,239 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp0.9 from training. Duration: 24.8833125 2023-05-11 12:06:14,898 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=646210.0, ans=0.125 2023-05-11 12:06:16,085 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0 from training. Duration: 23.39 2023-05-11 12:06:20,501 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp0.9 from training. Duration: 28.72225 2023-05-11 12:06:20,797 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer_ff2.min_abs, batch_count=646260.0, ans=0.1 2023-05-11 12:06:28,048 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=7.29 vs. limit=15.0 2023-05-11 12:06:31,773 WARNING [train.py:1182] (0/2) Exclude cut with ID 585-294811-0110-133686-0_sp0.9 from training. Duration: 20.8944375 2023-05-11 12:06:34,612 INFO [train.py:1021] (0/2) Epoch 36, batch 2000, loss[loss=0.1438, simple_loss=0.2258, pruned_loss=0.03088, over 36755.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2575, pruned_loss=0.03916, over 7215238.90 frames. ], batch size: 89, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 12:06:42,240 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=646310.0, ans=0.0 2023-05-11 12:06:45,796 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.6269, 4.9467, 5.1111, 4.8214], device='cuda:0') 2023-05-11 12:06:49,039 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0_sp0.9 from training. Duration: 23.8444375 2023-05-11 12:06:52,349 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.9843, 4.1732, 4.6423, 4.9025], device='cuda:0') 2023-05-11 12:07:01,470 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn2.whiten, num_groups=1, num_channels=192, metric=21.14 vs. 
limit=22.5 2023-05-11 12:07:02,368 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=646360.0, ans=0.125 2023-05-11 12:07:06,949 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.9323, 3.3102, 3.7989, 3.6387], device='cuda:0') 2023-05-11 12:07:10,828 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0 from training. Duration: 25.85 2023-05-11 12:07:10,835 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0 from training. Duration: 21.39 2023-05-11 12:07:15,527 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward2.hidden_balancer.prob, batch_count=646410.0, ans=0.125 2023-05-11 12:07:18,428 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=646460.0, ans=0.125 2023-05-11 12:07:21,001 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0 from training. Duration: 27.92 2023-05-11 12:07:36,380 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=646510.0, ans=0.0 2023-05-11 12:07:44,954 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.963e+02 3.522e+02 3.903e+02 4.549e+02 7.575e+02, threshold=7.805e+02, percent-clipped=0.0 2023-05-11 12:07:49,301 INFO [train.py:1021] (0/2) Epoch 36, batch 2050, loss[loss=0.1654, simple_loss=0.2518, pruned_loss=0.03949, over 37035.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2579, pruned_loss=0.03994, over 7170392.60 frames. ], batch size: 99, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 12:07:49,381 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0039-130165-0_sp0.9 from training. Duration: 20.661125 2023-05-11 12:07:58,263 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff3_skip_rate, batch_count=646560.0, ans=0.0 2023-05-11 12:08:08,387 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=646610.0, ans=0.1 2023-05-11 12:08:09,990 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=646610.0, ans=0.1 2023-05-11 12:08:12,669 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0043-15874-0_sp0.9 from training. Duration: 20.07225 2023-05-11 12:08:19,860 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0 from training. Duration: 21.01 2023-05-11 12:08:41,332 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 12:09:02,424 INFO [train.py:1021] (0/2) Epoch 36, batch 2100, loss[loss=0.17, simple_loss=0.2585, pruned_loss=0.04078, over 31976.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2581, pruned_loss=0.04039, over 7131549.45 frames. ], batch size: 170, lr: 3.04e-03, grad_scale: 32.0 2023-05-11 12:09:02,733 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=646810.0, ans=0.2 2023-05-11 12:09:26,839 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0 from training. Duration: 20.65 2023-05-11 12:09:34,103 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0 from training. 
Duration: 21.46 2023-05-11 12:09:47,194 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=646960.0, ans=0.0 2023-05-11 12:10:05,872 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer2.prob, batch_count=647010.0, ans=0.125 2023-05-11 12:10:14,090 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.950e+02 3.560e+02 4.128e+02 4.689e+02 9.206e+02, threshold=8.256e+02, percent-clipped=3.0 2023-05-11 12:10:17,054 INFO [train.py:1021] (0/2) Epoch 36, batch 2150, loss[loss=0.1463, simple_loss=0.2302, pruned_loss=0.03125, over 36829.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2585, pruned_loss=0.04088, over 7099337.34 frames. ], batch size: 89, lr: 3.03e-03, grad_scale: 16.0 2023-05-11 12:10:17,176 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0 from training. Duration: 0.92 2023-05-11 12:10:20,240 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=647060.0, ans=0.125 2023-05-11 12:10:25,809 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0_sp0.9 from training. Duration: 23.7666875 2023-05-11 12:10:33,350 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=647110.0, ans=0.125 2023-05-11 12:10:53,380 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=647160.0, ans=0.125 2023-05-11 12:11:03,140 WARNING [train.py:1182] (0/2) Exclude cut with ID 8544-281189-0060-101339-0_sp0.9 from training. Duration: 20.861125 2023-05-11 12:11:13,598 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.prob, batch_count=647210.0, ans=0.125 2023-05-11 12:11:14,675 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0_sp0.9 from training. Duration: 22.711125 2023-05-11 12:11:30,495 INFO [train.py:1021] (0/2) Epoch 36, batch 2200, loss[loss=0.1553, simple_loss=0.2383, pruned_loss=0.03619, over 37062.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2573, pruned_loss=0.04053, over 7143337.57 frames. ], batch size: 88, lr: 3.03e-03, grad_scale: 16.0 2023-05-11 12:11:36,937 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.6267, 4.9544, 5.1127, 4.8418], device='cuda:0') 2023-05-11 12:11:39,977 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.min_positive, batch_count=647310.0, ans=0.025 2023-05-11 12:11:58,191 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp1.1 from training. Duration: 22.986375 2023-05-11 12:11:58,510 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=647360.0, ans=0.125 2023-05-11 12:12:11,411 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=647410.0, ans=0.2 2023-05-11 12:12:14,018 WARNING [train.py:1182] (0/2) Exclude cut with ID 8040-260924-0003-80960-0_sp0.9 from training. Duration: 22.07225 2023-05-11 12:12:19,847 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0045-26330-0_sp0.9 from training. 
Duration: 20.3055625 2023-05-11 12:12:21,356 WARNING [train.py:1182] (0/2) Exclude cut with ID 6356-271890-0060-94317-0_sp0.9 from training. Duration: 20.72225 2023-05-11 12:12:27,962 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.59 vs. limit=6.0 2023-05-11 12:12:28,522 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=647510.0, ans=0.1 2023-05-11 12:12:34,610 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=647510.0, ans=0.125 2023-05-11 12:12:40,730 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp1.1 from training. Duration: 22.4818125 2023-05-11 12:12:42,524 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.928e+02 3.505e+02 3.820e+02 4.410e+02 7.276e+02, threshold=7.640e+02, percent-clipped=0.0 2023-05-11 12:12:44,221 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=647560.0, ans=0.0 2023-05-11 12:12:45,427 INFO [train.py:1021] (0/2) Epoch 36, batch 2250, loss[loss=0.215, simple_loss=0.2904, pruned_loss=0.06986, over 24283.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.257, pruned_loss=0.04084, over 7128733.77 frames. ], batch size: 234, lr: 3.03e-03, grad_scale: 16.0 2023-05-11 12:13:04,729 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=647610.0, ans=0.125 2023-05-11 12:13:07,303 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp0.9 from training. Duration: 25.0944375 2023-05-11 12:13:11,632 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0 from training. Duration: 21.515 2023-05-11 12:13:18,888 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp0.9 from training. Duration: 27.02225 2023-05-11 12:13:23,200 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0010-62480-0_sp0.9 from training. Duration: 22.22225 2023-05-11 12:13:24,869 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.const_attention_rate, batch_count=647660.0, ans=0.025 2023-05-11 12:13:30,371 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0085-44554-0_sp0.9 from training. Duration: 20.85 2023-05-11 12:13:47,732 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 12:13:47,777 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=647760.0, ans=0.1 2023-05-11 12:13:59,106 INFO [train.py:1021] (0/2) Epoch 36, batch 2300, loss[loss=0.2053, simple_loss=0.2807, pruned_loss=0.06499, over 24622.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2566, pruned_loss=0.04083, over 7129160.38 frames. ], batch size: 233, lr: 3.03e-03, grad_scale: 16.0 2023-05-11 12:14:00,857 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=647810.0, ans=0.125 2023-05-11 12:14:03,552 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0 from training. 
Duration: 21.54 2023-05-11 12:14:05,168 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=647810.0, ans=0.1 2023-05-11 12:14:07,858 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp1.1 from training. Duration: 20.5318125 2023-05-11 12:14:16,701 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0012-134311-0_sp0.9 from training. Duration: 21.9333125 2023-05-11 12:15:05,643 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0025-130151-0_sp0.9 from training. Duration: 21.7944375 2023-05-11 12:15:09,997 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.874e+02 3.461e+02 3.850e+02 4.312e+02 5.753e+02, threshold=7.699e+02, percent-clipped=0.0 2023-05-11 12:15:13,415 INFO [train.py:1021] (0/2) Epoch 36, batch 2350, loss[loss=0.1848, simple_loss=0.2746, pruned_loss=0.04751, over 34974.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2564, pruned_loss=0.04082, over 7131821.50 frames. ], batch size: 145, lr: 3.03e-03, grad_scale: 16.0 2023-05-11 12:15:17,161 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.0589, 4.4421, 2.8175, 3.2706], device='cuda:0') 2023-05-11 12:15:19,684 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0_sp0.9 from training. Duration: 22.4666875 2023-05-11 12:15:25,521 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0 from training. Duration: 21.635 2023-05-11 12:15:30,074 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=648110.0, ans=0.125 2023-05-11 12:15:31,282 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0_sp0.9 from training. Duration: 24.038875 2023-05-11 12:15:33,497 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=4.71 vs. limit=12.0 2023-05-11 12:15:36,214 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=2.97 vs. limit=10.0 2023-05-11 12:15:47,336 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=2.502e-03 2023-05-11 12:15:48,832 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=648160.0, ans=0.125 2023-05-11 12:16:14,347 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp1.1 from training. Duration: 21.786375 2023-05-11 12:16:27,128 INFO [train.py:1021] (0/2) Epoch 36, batch 2400, loss[loss=0.1765, simple_loss=0.2662, pruned_loss=0.04338, over 37092.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2574, pruned_loss=0.04101, over 7139551.44 frames. ], batch size: 103, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:16:27,222 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0 from training. Duration: 20.22 2023-05-11 12:16:32,301 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=6.38 vs. 
limit=15.0 2023-05-11 12:16:33,140 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=648310.0, ans=0.2 2023-05-11 12:17:07,603 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.skip_rate, batch_count=648410.0, ans=0.07 2023-05-11 12:17:37,916 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.919e+02 3.499e+02 3.761e+02 4.367e+02 6.015e+02, threshold=7.523e+02, percent-clipped=0.0 2023-05-11 12:17:40,841 INFO [train.py:1021] (0/2) Epoch 36, batch 2450, loss[loss=0.1743, simple_loss=0.2665, pruned_loss=0.04102, over 37037.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2567, pruned_loss=0.04073, over 7159028.36 frames. ], batch size: 110, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:18:04,150 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff3_skip_rate, batch_count=648610.0, ans=0.0 2023-05-11 12:18:05,560 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff3_skip_rate, batch_count=648610.0, ans=0.0 2023-05-11 12:18:08,800 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3652, 3.5933, 3.9066, 3.6341], device='cuda:0') 2023-05-11 12:18:18,885 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.2292, 5.0257, 4.4654, 4.7778], device='cuda:0') 2023-05-11 12:18:28,963 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0 from training. Duration: 25.285 2023-05-11 12:18:29,209 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.3407, 4.6869, 4.8292, 4.5959], device='cuda:0') 2023-05-11 12:18:48,884 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=648760.0, ans=0.2 2023-05-11 12:18:55,883 INFO [train.py:1021] (0/2) Epoch 36, batch 2500, loss[loss=0.1695, simple_loss=0.2566, pruned_loss=0.04117, over 36960.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2566, pruned_loss=0.0406, over 7153500.10 frames. ], batch size: 95, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:19:31,350 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.9443, 5.1363, 5.2444, 5.8168], device='cuda:0') 2023-05-11 12:19:35,495 WARNING [train.py:1182] (0/2) Exclude cut with ID 811-130148-0001-63453-0_sp0.9 from training. Duration: 20.861125 2023-05-11 12:19:42,090 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.const_attention_rate, batch_count=648960.0, ans=0.025 2023-05-11 12:19:54,150 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.whiten, num_groups=1, num_channels=192, metric=3.95 vs. limit=12.0 2023-05-11 12:19:57,426 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0 from training. Duration: 20.88 2023-05-11 12:20:06,182 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.844e+02 3.501e+02 3.857e+02 4.439e+02 7.117e+02, threshold=7.714e+02, percent-clipped=0.0 2023-05-11 12:20:09,059 INFO [train.py:1021] (0/2) Epoch 36, batch 2550, loss[loss=0.2076, simple_loss=0.2823, pruned_loss=0.06642, over 24926.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2565, pruned_loss=0.0406, over 7146623.09 frames. 
], batch size: 233, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:20:10,811 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.min_abs, batch_count=649060.0, ans=0.5 2023-05-11 12:20:25,887 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=649110.0, ans=0.0 2023-05-11 12:20:30,652 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0_sp0.9 from training. Duration: 23.4166875 2023-05-11 12:20:42,371 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.prob, batch_count=649160.0, ans=0.125 2023-05-11 12:21:15,239 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.4678, 4.6869, 2.3086, 2.5814], device='cuda:0') 2023-05-11 12:21:23,495 INFO [train.py:1021] (0/2) Epoch 36, batch 2600, loss[loss=0.2032, simple_loss=0.2819, pruned_loss=0.06228, over 24684.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2561, pruned_loss=0.0405, over 7167648.64 frames. ], batch size: 233, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:21:50,225 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0 from training. Duration: 21.24 2023-05-11 12:21:50,237 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0_sp0.9 from training. Duration: 23.9055625 2023-05-11 12:21:51,942 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=649410.0, ans=0.0 2023-05-11 12:22:01,423 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=649410.0, ans=0.125 2023-05-11 12:22:10,067 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer1.prob, batch_count=649460.0, ans=0.125 2023-05-11 12:22:24,900 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp0.9 from training. Duration: 25.988875 2023-05-11 12:22:33,587 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0001-134300-0_sp0.9 from training. Duration: 20.67225 2023-05-11 12:22:34,961 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.945e+02 3.525e+02 3.964e+02 4.603e+02 9.530e+02, threshold=7.928e+02, percent-clipped=4.0 2023-05-11 12:22:37,854 INFO [train.py:1021] (0/2) Epoch 36, batch 2650, loss[loss=0.1769, simple_loss=0.2724, pruned_loss=0.04071, over 37094.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.256, pruned_loss=0.04035, over 7164604.78 frames. ], batch size: 110, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:23:21,212 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0 from training. Duration: 20.34 2023-05-11 12:23:31,726 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=649710.0, ans=0.0 2023-05-11 12:23:37,893 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=3.32 vs. limit=12.0 2023-05-11 12:23:52,301 INFO [train.py:1021] (0/2) Epoch 36, batch 2700, loss[loss=0.1619, simple_loss=0.2473, pruned_loss=0.03821, over 36953.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.256, pruned_loss=0.04035, over 7132648.67 frames. 
], batch size: 91, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:24:03,364 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=649810.0, ans=0.125 2023-05-11 12:24:10,686 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=649860.0, ans=0.0 2023-05-11 12:24:12,042 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=649860.0, ans=0.1 2023-05-11 12:24:35,558 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp0.9 from training. Duration: 25.061125 2023-05-11 12:24:44,359 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0 from training. Duration: 0.83 2023-05-11 12:24:57,149 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=650010.0, ans=0.2 2023-05-11 12:25:03,805 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.897e+02 3.563e+02 4.191e+02 5.100e+02 7.795e+02, threshold=8.382e+02, percent-clipped=0.0 2023-05-11 12:25:06,673 INFO [train.py:1021] (0/2) Epoch 36, batch 2750, loss[loss=0.1784, simple_loss=0.2721, pruned_loss=0.04231, over 36739.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2562, pruned_loss=0.04055, over 7114606.30 frames. ], batch size: 118, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:25:09,666 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0 from training. Duration: 24.73 2023-05-11 12:25:14,365 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.7008, 3.8054, 4.1716, 3.7615], device='cuda:0') 2023-05-11 12:25:14,367 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=650060.0, ans=0.125 2023-05-11 12:25:21,875 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0 from training. Duration: 23.965 2023-05-11 12:25:23,628 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.4726, 4.6341, 2.3239, 2.5601], device='cuda:0') 2023-05-11 12:25:30,391 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0030-146996-0_sp0.9 from training. Duration: 22.088875 2023-05-11 12:25:32,101 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=650110.0, ans=0.1 2023-05-11 12:25:32,102 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=650110.0, ans=0.2 2023-05-11 12:25:44,397 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.max_abs, batch_count=650160.0, ans=10.0 2023-05-11 12:25:49,857 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0_sp0.9 from training. Duration: 23.6 2023-05-11 12:26:19,570 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=650310.0, ans=0.1 2023-05-11 12:26:20,568 INFO [train.py:1021] (0/2) Epoch 36, batch 2800, loss[loss=0.1641, simple_loss=0.2553, pruned_loss=0.03645, over 34680.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2561, pruned_loss=0.04064, over 7092972.01 frames. 
], batch size: 145, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:26:23,792 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=650310.0, ans=0.1 2023-05-11 12:26:29,330 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.hidden_balancer.prob, batch_count=650310.0, ans=0.125 2023-05-11 12:26:32,886 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=650310.0, ans=0.125 2023-05-11 12:26:48,734 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.prob, batch_count=650410.0, ans=0.125 2023-05-11 12:26:55,961 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=650410.0, ans=0.1 2023-05-11 12:27:29,034 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=650510.0, ans=0.0 2023-05-11 12:27:31,500 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.964e+02 3.462e+02 3.857e+02 4.453e+02 6.146e+02, threshold=7.714e+02, percent-clipped=0.0 2023-05-11 12:27:33,074 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0 from training. Duration: 23.795 2023-05-11 12:27:34,384 INFO [train.py:1021] (0/2) Epoch 36, batch 2850, loss[loss=0.1508, simple_loss=0.2317, pruned_loss=0.03499, over 37085.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2552, pruned_loss=0.04029, over 7130118.92 frames. ], batch size: 88, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:27:41,905 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer2.prob, batch_count=650560.0, ans=0.125 2023-05-11 12:27:48,846 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp1.1 from training. Duration: 21.5409375 2023-05-11 12:27:51,613 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp0.9 from training. Duration: 24.97775 2023-05-11 12:28:03,689 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0_sp0.9 from training. Duration: 23.3444375 2023-05-11 12:28:11,312 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.2080, 4.6495, 3.3566, 3.4277], device='cuda:0') 2023-05-11 12:28:17,339 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.1535, 4.4405, 4.7105, 4.7831], device='cuda:0') 2023-05-11 12:28:32,225 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0_sp0.9 from training. Duration: 23.2 2023-05-11 12:28:39,684 WARNING [train.py:1182] (0/2) Exclude cut with ID 5653-46179-0060-117930-0_sp0.9 from training. Duration: 21.17225 2023-05-11 12:28:49,004 INFO [train.py:1021] (0/2) Epoch 36, batch 2900, loss[loss=0.1451, simple_loss=0.2319, pruned_loss=0.02914, over 37068.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2552, pruned_loss=0.04018, over 7159470.75 frames. ], batch size: 94, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:28:50,915 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 12:28:59,290 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp0.9 from training. 
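The recurring "Exclude cut with ID ... from training. Duration: ..." warnings record a duration filter applied while the training cuts are drawn: every excluded duration in this log is either above roughly 20 s or below roughly 1 s. A minimal sketch of such a filter over a lhotse CutSet follows; the 1.0/20.0 bounds and the helper name remove_short_and_long_utt are assumptions inferred from the logged durations, not necessarily the exact code behind these lines.

import logging
from lhotse import CutSet  # the training cuts are a lhotse CutSet

# Assumed bounds, inferred from the excluded durations logged above.
MIN_SECS, MAX_SECS = 1.0, 20.0

def remove_short_and_long_utt(c) -> bool:
    # Hypothetical helper: return True to keep a cut, False to drop it.
    if c.duration < MIN_SECS or c.duration > MAX_SECS:
        logging.warning(
            f"Exclude cut with ID {c.id} from training. Duration: {c.duration}"
        )
        return False
    return True

def filter_train_cuts(train_cuts: CutSet) -> CutSet:
    # CutSet.filter is lazy, so warnings like the ones above would only appear
    # when the cuts are actually drawn, interleaved with the batch logs.
    return train_cuts.filter(remove_short_and_long_utt)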
Duration: 24.6555625 2023-05-11 12:29:07,888 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=650860.0, ans=0.125 2023-05-11 12:29:17,365 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.78 vs. limit=6.0 2023-05-11 12:29:22,752 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=650910.0, ans=0.125 2023-05-11 12:29:24,373 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.const_attention_rate, batch_count=650910.0, ans=0.025 2023-05-11 12:29:27,308 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.6606, 3.7298, 4.1567, 3.6665], device='cuda:0') 2023-05-11 12:29:34,295 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer2.prob, batch_count=650960.0, ans=0.125 2023-05-11 12:29:37,959 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.2520, 4.1333, 4.7095, 4.8881], device='cuda:0') 2023-05-11 12:29:49,474 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.3725, 5.6714, 5.5219, 6.1219], device='cuda:0') 2023-05-11 12:29:55,196 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0 from training. Duration: 20.44 2023-05-11 12:29:59,570 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.734e+02 3.392e+02 3.760e+02 4.283e+02 8.001e+02, threshold=7.520e+02, percent-clipped=1.0 2023-05-11 12:29:59,975 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=651010.0, ans=0.0 2023-05-11 12:30:02,494 INFO [train.py:1021] (0/2) Epoch 36, batch 2950, loss[loss=0.1817, simple_loss=0.268, pruned_loss=0.04764, over 35972.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.256, pruned_loss=0.04044, over 7148958.77 frames. ], batch size: 133, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:30:10,461 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0_sp0.9 from training. Duration: 23.45 2023-05-11 12:30:13,825 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.06 vs. limit=6.0 2023-05-11 12:30:25,284 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=651110.0, ans=0.125 2023-05-11 12:30:31,515 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=651160.0, ans=0.1 2023-05-11 12:30:32,969 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=651160.0, ans=0.025 2023-05-11 12:30:40,025 WARNING [train.py:1182] (0/2) Exclude cut with ID 6945-60535-0076-12784-0_sp0.9 from training. Duration: 20.52225 2023-05-11 12:30:47,080 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0 from training. Duration: 22.19 2023-05-11 12:30:59,079 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp1.1 from training. 
Duration: 25.3818125 2023-05-11 12:31:05,476 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.3145, 3.0752, 4.6578, 3.3410], device='cuda:0') 2023-05-11 12:31:15,419 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=651310.0, ans=0.125 2023-05-11 12:31:16,513 INFO [train.py:1021] (0/2) Epoch 36, batch 3000, loss[loss=0.1531, simple_loss=0.2318, pruned_loss=0.03721, over 36995.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2563, pruned_loss=0.04054, over 7143098.42 frames. ], batch size: 91, lr: 3.03e-03, grad_scale: 32.0 2023-05-11 12:31:16,514 INFO [train.py:1048] (0/2) Computing validation loss 2023-05-11 12:31:29,423 INFO [train.py:1057] (0/2) Epoch 36, validation: loss=0.1514, simple_loss=0.252, pruned_loss=0.02536, over 944034.00 frames. 2023-05-11 12:31:29,423 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 18592MB 2023-05-11 12:31:29,496 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp0.9 from training. Duration: 28.0944375 2023-05-11 12:31:35,323 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0_sp0.9 from training. Duration: 22.9444375 2023-05-11 12:31:35,630 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.6483, 5.4769, 4.8470, 5.2634], device='cuda:0') 2023-05-11 12:31:43,280 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp1.1 from training. Duration: 21.6318125 2023-05-11 12:32:02,659 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0 from training. Duration: 23.695 2023-05-11 12:32:28,788 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0 from training. Duration: 23.955 2023-05-11 12:32:36,879 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer_ff2.min_abs, batch_count=651510.0, ans=0.1 2023-05-11 12:32:40,982 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.885e+02 3.400e+02 3.782e+02 4.532e+02 8.797e+02, threshold=7.564e+02, percent-clipped=1.0 2023-05-11 12:32:43,986 INFO [train.py:1021] (0/2) Epoch 36, batch 3050, loss[loss=0.1751, simple_loss=0.2631, pruned_loss=0.0435, over 31924.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2565, pruned_loss=0.04047, over 7136862.86 frames. ], batch size: 170, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:32:47,137 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=651560.0, ans=0.95 2023-05-11 12:33:01,522 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp0.9 from training. Duration: 26.438875 2023-05-11 12:33:46,332 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0021-26306-0_sp0.9 from training. Duration: 21.2444375 2023-05-11 12:33:46,359 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp0.9 from training. Duration: 31.02225 2023-05-11 12:33:47,415 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn1.whiten, num_groups=1, num_channels=192, metric=11.11 vs. limit=22.5 2023-05-11 12:33:56,402 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0 from training. 
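The "Computing validation loss", "Epoch 36, validation: loss=...", and "Maximum memory allocated so far" lines above record a periodic pass over the dev dataloader at a fixed batch interval. A minimal sketch of what such a pass might look like, with frame-weighted averaging of the losses, is below; forward_one_batch is an assumed helper, not icefall's API.

import torch

def compute_validation_loss_sketch(model, valid_dl, device) -> dict:
    """Hypothetical sketch: frame-weighted average losses over the dev set,
    plus the peak CUDA memory reported by the log line above."""
    model.eval()
    totals = {"loss": 0.0, "simple_loss": 0.0, "pruned_loss": 0.0}
    tot_frames = 0.0
    with torch.no_grad():
        for batch in valid_dl:
            # forward_one_batch is an assumed helper returning the three losses
            # and the number of frames in the batch.
            loss, simple_loss, pruned_loss, num_frames = forward_one_batch(
                model, batch, device
            )
            totals["loss"] += loss.item() * num_frames
            totals["simple_loss"] += simple_loss.item() * num_frames
            totals["pruned_loss"] += pruned_loss.item() * num_frames
            tot_frames += num_frames
    model.train()
    stats = {k: v / tot_frames for k, v in totals.items()}
    stats["frames"] = tot_frames
    if torch.cuda.is_available():
        stats["max_mem_mb"] = torch.cuda.max_memory_allocated(device) // (1024 ** 2)
    return stats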
Duration: 22.395 2023-05-11 12:33:57,761 INFO [train.py:1021] (0/2) Epoch 36, batch 3100, loss[loss=0.1604, simple_loss=0.2457, pruned_loss=0.03755, over 36844.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2566, pruned_loss=0.04064, over 7113681.96 frames. ], batch size: 96, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:34:12,331 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0 from training. Duration: 21.075 2023-05-11 12:34:18,431 WARNING [train.py:1182] (0/2) Exclude cut with ID 6482-98857-0025-147532-0_sp0.9 from training. Duration: 20.0055625 2023-05-11 12:34:18,441 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0037-132304-0_sp0.9 from training. Duration: 22.05 2023-05-11 12:34:18,457 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0 from training. Duration: 26.8349375 2023-05-11 12:34:18,604 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=651860.0, ans=0.0 2023-05-11 12:34:22,647 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp1.1 from training. Duration: 22.1090625 2023-05-11 12:34:27,312 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass_mid.scale_min, batch_count=651910.0, ans=0.2 2023-05-11 12:34:29,865 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp0.9 from training. Duration: 26.6166875 2023-05-11 12:34:49,388 WARNING [train.py:1182] (0/2) Exclude cut with ID 2046-178027-0000-53705-0_sp0.9 from training. Duration: 20.3055625 2023-05-11 12:35:01,585 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=5.36 vs. limit=15.0 2023-05-11 12:35:08,455 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.031e+02 3.618e+02 4.003e+02 4.768e+02 7.911e+02, threshold=8.007e+02, percent-clipped=1.0 2023-05-11 12:35:10,065 WARNING [train.py:1182] (0/2) Exclude cut with ID 7205-50138-0008-5373-0_sp0.9 from training. Duration: 20.7 2023-05-11 12:35:11,472 INFO [train.py:1021] (0/2) Epoch 36, batch 3150, loss[loss=0.1736, simple_loss=0.2633, pruned_loss=0.04197, over 34574.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2562, pruned_loss=0.04065, over 7092436.04 frames. ], batch size: 145, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:35:31,234 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.0201, 4.1984, 4.6456, 4.9189], device='cuda:0') 2023-05-11 12:35:48,064 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.68 vs. limit=6.0 2023-05-11 12:35:52,751 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0 from training. Duration: 22.48 2023-05-11 12:36:10,594 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp0.9 from training. Duration: 29.816625 2023-05-11 12:36:10,854 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=652260.0, ans=0.0 2023-05-11 12:36:22,945 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.prob, batch_count=652260.0, ans=0.125 2023-05-11 12:36:25,409 INFO [train.py:1021] (0/2) Epoch 36, batch 3200, loss[loss=0.1609, simple_loss=0.2477, pruned_loss=0.03701, over 37160.00 frames. 
], tot_loss[loss=0.1686, simple_loss=0.2561, pruned_loss=0.04053, over 7102749.75 frames. ], batch size: 102, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:36:28,281 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp1.1 from training. Duration: 22.7590625 2023-05-11 12:36:31,560 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten1.whitening_limit, batch_count=652310.0, ans=10.0 2023-05-11 12:36:33,786 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0 from training. Duration: 22.555 2023-05-11 12:36:49,623 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=3.50 vs. limit=12.0 2023-05-11 12:36:54,951 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0005-25975-0_sp0.9 from training. Duration: 21.688875 2023-05-11 12:37:08,863 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=652460.0, ans=0.125 2023-05-11 12:37:11,957 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=652460.0, ans=0.2 2023-05-11 12:37:14,793 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.prob, batch_count=652460.0, ans=0.125 2023-05-11 12:37:26,370 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=652510.0, ans=0.125 2023-05-11 12:37:31,702 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0_sp0.9 from training. Duration: 22.6 2023-05-11 12:37:35,030 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=652510.0, ans=0.0 2023-05-11 12:37:36,107 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.993e+02 3.664e+02 4.226e+02 5.405e+02 9.073e+02, threshold=8.453e+02, percent-clipped=4.0 2023-05-11 12:37:39,508 INFO [train.py:1021] (0/2) Epoch 36, batch 3250, loss[loss=0.1841, simple_loss=0.2724, pruned_loss=0.04787, over 34941.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.256, pruned_loss=0.04053, over 7102479.34 frames. ], batch size: 145, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:37:39,921 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.min_abs, batch_count=652560.0, ans=0.5 2023-05-11 12:38:08,932 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0 from training. Duration: 24.32 2023-05-11 12:38:13,808 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.40 vs. limit=10.0 2023-05-11 12:38:16,531 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=652660.0, ans=0.1 2023-05-11 12:38:28,151 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.1653, 4.0218, 3.7753, 4.0142, 3.4507, 3.1227, 3.5284, 3.1267], device='cuda:0') 2023-05-11 12:38:39,335 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=4.49 vs. 
limit=15.0 2023-05-11 12:38:46,432 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=3.70 vs. limit=6.0 2023-05-11 12:38:53,354 INFO [train.py:1021] (0/2) Epoch 36, batch 3300, loss[loss=0.1782, simple_loss=0.2661, pruned_loss=0.04509, over 37023.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2562, pruned_loss=0.04042, over 7125735.98 frames. ], batch size: 116, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:39:06,639 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-276745-0093-13116-0_sp0.9 from training. Duration: 21.061125 2023-05-11 12:39:11,163 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.hidden_balancer.prob, batch_count=652860.0, ans=0.125 2023-05-11 12:39:12,657 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=652860.0, ans=0.1 2023-05-11 12:39:19,563 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0024-15855-0_sp0.9 from training. Duration: 20.32225 2023-05-11 12:39:19,719 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=652860.0, ans=0.1 2023-05-11 12:39:33,006 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp1.1 from training. Duration: 0.7545625 2023-05-11 12:39:39,948 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.55 vs. limit=22.5 2023-05-11 12:39:49,775 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0_sp0.9 from training. Duration: 23.9333125 2023-05-11 12:39:55,675 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=653010.0, ans=0.125 2023-05-11 12:40:04,018 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.901e+02 3.445e+02 3.850e+02 4.554e+02 6.892e+02, threshold=7.700e+02, percent-clipped=0.0 2023-05-11 12:40:06,881 INFO [train.py:1021] (0/2) Epoch 36, batch 3350, loss[loss=0.1566, simple_loss=0.2472, pruned_loss=0.03301, over 36991.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2557, pruned_loss=0.04024, over 7131887.70 frames. ], batch size: 104, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:40:20,834 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp1.1 from training. Duration: 20.17275 2023-05-11 12:40:28,050 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp1.1 from training. Duration: 20.436375 2023-05-11 12:40:47,952 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=653160.0, ans=0.125 2023-05-11 12:41:21,700 INFO [train.py:1021] (0/2) Epoch 36, batch 3400, loss[loss=0.1517, simple_loss=0.2355, pruned_loss=0.03393, over 37189.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2559, pruned_loss=0.04042, over 7100681.58 frames. ], batch size: 93, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:41:50,428 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0_sp0.9 from training. Duration: 23.1055625 2023-05-11 12:41:51,850 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp1.1 from training. 
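The optim.py:478 lines ("Clipping_scale=2.0, grad-norm quartiles ..., threshold=..., percent-clipped=...") summarize recently observed gradient norms and the clipping threshold derived from them; throughout this log the threshold equals Clipping_scale times the middle quartile value (e.g. 2.0 x 3.850e+02 = 7.700e+02 just above). The sketch below reproduces that relationship but is illustrative only, not the actual optim.py implementation; treating the five summary points as min/25%/50%/75%/max and the history length of 128 are assumptions.

import torch
from collections import deque

class GradNormClipperSketch:
    """Illustrative only: clip gradients to clipping_scale * median of
    recently observed total gradient norms."""

    def __init__(self, params, clipping_scale: float = 2.0, history: int = 128):
        self.params = list(params)
        self.clipping_scale = clipping_scale
        self.norms = deque(maxlen=history)  # recent total grad norms
        self.num_steps = 0
        self.num_clipped = 0

    def clip_(self):
        grads = [p.grad.norm() for p in self.params if p.grad is not None]
        total_norm = torch.stack(grads).norm().item()
        self.norms.append(total_norm)
        norms_t = torch.tensor(list(self.norms))
        # Assumed to correspond to the five "grad-norm quartiles" values above.
        quartiles = torch.quantile(norms_t, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = self.clipping_scale * quartiles[2].item()  # scale * median
        self.num_steps += 1
        if total_norm > threshold:
            self.num_clipped += 1
            for p in self.params:
                if p.grad is not None:
                    p.grad.mul_(threshold / total_norm)
        percent_clipped = 100.0 * self.num_clipped / self.num_steps
        return quartiles, threshold, percent_clipped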
Duration: 23.5 2023-05-11 12:42:01,444 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=653410.0, ans=0.0 2023-05-11 12:42:03,960 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp0.9 from training. Duration: 26.62775 2023-05-11 12:42:13,451 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=653460.0, ans=0.125 2023-05-11 12:42:17,492 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0 from training. Duration: 21.105 2023-05-11 12:42:23,383 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0_sp0.9 from training. Duration: 24.411125 2023-05-11 12:42:32,091 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.925e+02 3.447e+02 3.788e+02 4.358e+02 7.762e+02, threshold=7.576e+02, percent-clipped=1.0 2023-05-11 12:42:35,025 INFO [train.py:1021] (0/2) Epoch 36, batch 3450, loss[loss=0.1769, simple_loss=0.2723, pruned_loss=0.04072, over 35664.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2554, pruned_loss=0.04017, over 7132559.15 frames. ], batch size: 133, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:42:51,344 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp1.1 from training. Duration: 21.263625 2023-05-11 12:43:13,912 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=653660.0, ans=0.95 2023-05-11 12:43:22,441 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=653710.0, ans=0.04949747468305833 2023-05-11 12:43:23,757 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0 from training. Duration: 20.795 2023-05-11 12:43:33,996 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0 from training. Duration: 24.76 2023-05-11 12:43:34,018 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0_sp0.9 from training. Duration: 22.25 2023-05-11 12:43:36,597 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=9.11 vs. limit=15.0 2023-05-11 12:43:47,974 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.7570, 3.1603, 3.3636, 3.3805], device='cuda:0') 2023-05-11 12:43:48,946 INFO [train.py:1021] (0/2) Epoch 36, batch 3500, loss[loss=0.1924, simple_loss=0.2783, pruned_loss=0.05324, over 24377.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2555, pruned_loss=0.04034, over 7111532.11 frames. ], batch size: 233, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:43:58,334 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp1.1 from training. 
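The zipformer.py:1666 lines above print an attn_weights_entropy tensor with one entry per attention head for a named self_attn_weights module. A hedged, illustrative way to compute such a diagnostic is sketched below; the (heads, batch, query, key) layout is an assumption, not necessarily the layout used inside zipformer.py.

import torch

def attn_weights_entropy_sketch(attn_weights: torch.Tensor) -> torch.Tensor:
    """Illustrative diagnostic: entropy of each head's attention distribution
    over keys, averaged over batch and query positions, one value per head."""
    p = attn_weights.clamp(min=1.0e-20)       # avoid log(0)
    entropy = -(p * p.log()).sum(dim=-1)      # (heads, batch, query)
    return entropy.mean(dim=(1, 2))           # one value per head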
Duration: 20.5045625 2023-05-11 12:44:00,108 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.prob, batch_count=653810.0, ans=0.125 2023-05-11 12:44:04,438 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=653860.0, ans=0.2 2023-05-11 12:44:07,239 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer2.prob, batch_count=653860.0, ans=0.125 2023-05-11 12:44:12,989 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=653860.0, ans=0.125 2023-05-11 12:44:16,022 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=653860.0, ans=0.1 2023-05-11 12:44:32,366 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=6.40 vs. limit=15.0 2023-05-11 12:44:59,466 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.977e+02 3.454e+02 3.707e+02 4.406e+02 6.191e+02, threshold=7.413e+02, percent-clipped=0.0 2023-05-11 12:45:02,429 INFO [train.py:1021] (0/2) Epoch 36, batch 3550, loss[loss=0.1521, simple_loss=0.2403, pruned_loss=0.03199, over 36858.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2548, pruned_loss=0.04005, over 7112709.84 frames. ], batch size: 96, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:45:07,209 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=6.28 vs. limit=15.0 2023-05-11 12:45:13,959 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.prob, batch_count=654060.0, ans=0.125 2023-05-11 12:45:15,382 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=654110.0, ans=0.0 2023-05-11 12:45:17,190 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=4.09 vs. limit=15.0 2023-05-11 12:45:32,222 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.prob, batch_count=654160.0, ans=0.125 2023-05-11 12:45:59,526 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=3.79 vs. limit=15.0 2023-05-11 12:46:13,372 INFO [train.py:1021] (0/2) Epoch 36, batch 3600, loss[loss=0.1874, simple_loss=0.2726, pruned_loss=0.05108, over 36723.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2554, pruned_loss=0.04029, over 7106058.45 frames. ], batch size: 118, lr: 3.02e-03, grad_scale: 32.0 2023-05-11 12:46:39,362 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=654360.0, ans=0.125 2023-05-11 12:47:02,967 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/epoch-36.pt 2023-05-11 12:47:16,954 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp1.1 from training. Duration: 22.2954375 2023-05-11 12:47:21,727 INFO [train.py:1021] (0/2) Epoch 37, batch 0, loss[loss=0.1744, simple_loss=0.2709, pruned_loss=0.03891, over 37078.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2709, pruned_loss=0.03891, over 37078.00 frames. 
], batch size: 110, lr: 2.98e-03, grad_scale: 32.0 2023-05-11 12:47:21,728 INFO [train.py:1048] (0/2) Computing validation loss 2023-05-11 12:47:34,578 INFO [train.py:1057] (0/2) Epoch 37, validation: loss=0.1517, simple_loss=0.2524, pruned_loss=0.02548, over 944034.00 frames. 2023-05-11 12:47:34,579 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 18592MB 2023-05-11 12:47:36,239 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass_mid.scale_min, batch_count=654490.0, ans=0.2 2023-05-11 12:47:36,274 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff2_skip_rate, batch_count=654490.0, ans=0.0 2023-05-11 12:47:39,242 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=654490.0, ans=0.125 2023-05-11 12:47:39,280 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=654490.0, ans=0.125 2023-05-11 12:47:51,620 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.904e+02 3.564e+02 4.179e+02 4.889e+02 8.450e+02, threshold=8.359e+02, percent-clipped=2.0 2023-05-11 12:48:02,654 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=654590.0, ans=0.0 2023-05-11 12:48:29,003 WARNING [train.py:1182] (0/2) Exclude cut with ID 298-126791-0067-24026-0_sp0.9 from training. Duration: 21.438875 2023-05-11 12:48:33,394 WARNING [train.py:1182] (0/2) Exclude cut with ID 5652-39938-0025-23684-0_sp0.9 from training. Duration: 22.2055625 2023-05-11 12:48:47,909 INFO [train.py:1021] (0/2) Epoch 37, batch 50, loss[loss=0.189, simple_loss=0.283, pruned_loss=0.04748, over 36746.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2596, pruned_loss=0.03679, over 1641080.55 frames. ], batch size: 118, lr: 2.98e-03, grad_scale: 32.0 2023-05-11 12:48:58,870 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=654740.0, ans=0.1 2023-05-11 12:49:08,209 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=5.03 vs. limit=15.0 2023-05-11 12:49:29,520 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=654840.0, ans=0.0 2023-05-11 12:49:36,915 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=654890.0, ans=0.125 2023-05-11 12:49:44,186 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.0748, 5.2865, 5.4002, 5.9189], device='cuda:0') 2023-05-11 12:50:01,910 INFO [train.py:1021] (0/2) Epoch 37, batch 100, loss[loss=0.1826, simple_loss=0.2677, pruned_loss=0.0487, over 23987.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2582, pruned_loss=0.03565, over 2876553.01 frames. 
], batch size: 234, lr: 2.97e-03, grad_scale: 32.0 2023-05-11 12:50:11,478 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer2.prob, batch_count=654990.0, ans=0.125 2023-05-11 12:50:19,793 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.447e+02 2.859e+02 3.171e+02 3.608e+02 5.536e+02, threshold=6.342e+02, percent-clipped=0.0 2023-05-11 12:50:20,113 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=655040.0, ans=0.125 2023-05-11 12:51:15,568 INFO [train.py:1021] (0/2) Epoch 37, batch 150, loss[loss=0.1641, simple_loss=0.2625, pruned_loss=0.03282, over 36829.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2553, pruned_loss=0.03491, over 3837091.72 frames. ], batch size: 111, lr: 2.97e-03, grad_scale: 32.0 2023-05-11 12:51:26,185 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=655240.0, ans=0.1 2023-05-11 12:51:37,995 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0 from training. Duration: 24.525 2023-05-11 12:51:54,934 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=655340.0, ans=0.125 2023-05-11 12:51:59,577 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.50 vs. limit=10.0 2023-05-11 12:52:14,993 WARNING [train.py:1182] (0/2) Exclude cut with ID 3699-47246-0007-3408-0_sp0.9 from training. Duration: 20.26675 2023-05-11 12:52:29,499 INFO [train.py:1021] (0/2) Epoch 37, batch 200, loss[loss=0.1647, simple_loss=0.2617, pruned_loss=0.03385, over 35925.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2542, pruned_loss=0.03441, over 4595044.75 frames. ], batch size: 133, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 12:52:29,562 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp0.9 from training. Duration: 27.25 2023-05-11 12:52:46,648 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=655540.0, ans=0.125 2023-05-11 12:52:49,179 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.373e+02 2.870e+02 3.399e+02 4.025e+02 8.119e+02, threshold=6.798e+02, percent-clipped=3.0 2023-05-11 12:52:59,504 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer2.min_abs, batch_count=655590.0, ans=0.5 2023-05-11 12:53:07,854 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=655590.0, ans=0.0 2023-05-11 12:53:22,879 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.skip_rate, batch_count=655640.0, ans=0.09899494936611666 2023-05-11 12:53:26,463 INFO [scaling.py:969] (0/2) Whitening: name=encoder_embed.convnext.out_whiten, num_groups=1, num_channels=128, metric=4.64 vs. 
limit=5.0 2023-05-11 12:53:30,046 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward3.hidden_balancer.prob, batch_count=655690.0, ans=0.125 2023-05-11 12:53:30,130 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=655690.0, ans=10.0 2023-05-11 12:53:32,016 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=3.20 vs. limit=12.0 2023-05-11 12:53:37,841 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.2216, 5.4199, 5.5281, 6.0862], device='cuda:0') 2023-05-11 12:53:39,131 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0 from training. Duration: 21.68 2023-05-11 12:53:42,306 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.7977, 5.0540, 5.2619, 4.9515], device='cuda:0') 2023-05-11 12:53:43,427 INFO [train.py:1021] (0/2) Epoch 37, batch 250, loss[loss=0.1467, simple_loss=0.229, pruned_loss=0.03216, over 35381.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.254, pruned_loss=0.03434, over 5161326.04 frames. ], batch size: 78, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 12:53:52,409 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0 from training. Duration: 21.6300625 2023-05-11 12:53:52,720 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.4938, 3.4155, 3.1487, 4.0829, 2.6540, 3.4782, 4.1213, 3.5106], device='cuda:0') 2023-05-11 12:54:17,451 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0_sp0.9 from training. Duration: 24.033375 2023-05-11 12:54:48,760 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=655940.0, ans=0.0 2023-05-11 12:54:57,046 INFO [train.py:1021] (0/2) Epoch 37, batch 300, loss[loss=0.1747, simple_loss=0.2652, pruned_loss=0.04213, over 37029.00 frames. ], tot_loss[loss=0.1608, simple_loss=0.2536, pruned_loss=0.03403, over 5634543.33 frames. ], batch size: 116, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 12:55:16,660 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.435e+02 2.865e+02 3.163e+02 3.584e+02 6.155e+02, threshold=6.325e+02, percent-clipped=0.0 2023-05-11 12:55:18,161 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0 from training. Duration: 22.905 2023-05-11 12:55:19,632 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp1.1 from training. 
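The scaling.py:178 lines ("ScheduledFloat: name=..., batch_count=..., ans=...") print the current value of hyperparameters that are scheduled as a function of the training batch count. A stand-in sketch of such a scheduled float is below; the class name and the piecewise-linear/hold-constant behaviour are assumptions meant only to illustrate why each named value is reported together with batch_count.

class ScheduledFloatSketch:
    """Illustrative stand-in for a batch-count-scheduled float: piecewise-linear
    interpolation between (batch_count, value) breakpoints, held constant past
    the last breakpoint."""

    def __init__(self, *points):
        # points are (batch_count, value) pairs, e.g. (0, 0.5), (50000, 0.07)
        self.points = sorted(points)
        self.batch_count = 0

    def __float__(self) -> float:
        x = self.batch_count
        if x <= self.points[0][0]:
            return float(self.points[0][1])
        if x >= self.points[-1][0]:
            return float(self.points[-1][1])
        for (x0, y0), (x1, y1) in zip(self.points, self.points[1:]):
            if x0 <= x <= x1:
                return float(y0 + (y1 - y0) * (x - x0) / (x1 - x0))
        return float(self.points[-1][1])  # unreachable; keeps the return type total

# Example (hypothetical schedule): a skip rate decaying from 0.5 to 0.07
# over the first 50k batches, then held at 0.07.
# skip_rate = ScheduledFloatSketch((0, 0.5), (50000, 0.07))
# skip_rate.batch_count = 655690
# float(skip_rate)  # -> 0.07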
Duration: 23.4318125 2023-05-11 12:55:32,106 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=656090.0, ans=0.0 2023-05-11 12:55:33,488 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer1.prob, batch_count=656090.0, ans=0.125 2023-05-11 12:55:43,474 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass.skip_rate, batch_count=656140.0, ans=0.09899494936611666 2023-05-11 12:56:00,683 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=656190.0, ans=0.0 2023-05-11 12:56:11,005 INFO [train.py:1021] (0/2) Epoch 37, batch 350, loss[loss=0.1592, simple_loss=0.26, pruned_loss=0.02921, over 32543.00 frames. ], tot_loss[loss=0.1605, simple_loss=0.2533, pruned_loss=0.03384, over 5983813.06 frames. ], batch size: 170, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 12:56:27,721 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=656290.0, ans=0.125 2023-05-11 12:57:17,888 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp1.1 from training. Duration: 20.82275 2023-05-11 12:57:19,395 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp0.9 from training. Duration: 25.45 2023-05-11 12:57:19,575 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=656440.0, ans=0.0 2023-05-11 12:57:25,112 INFO [train.py:1021] (0/2) Epoch 37, batch 400, loss[loss=0.1759, simple_loss=0.271, pruned_loss=0.04035, over 36819.00 frames. ], tot_loss[loss=0.161, simple_loss=0.2539, pruned_loss=0.03409, over 6260421.99 frames. ], batch size: 113, lr: 2.97e-03, grad_scale: 32.0 2023-05-11 12:57:27,189 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=5.72 vs. limit=15.0 2023-05-11 12:57:28,117 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=656490.0, ans=0.1 2023-05-11 12:57:43,429 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.376e+02 3.098e+02 3.476e+02 3.974e+02 6.719e+02, threshold=6.952e+02, percent-clipped=2.0 2023-05-11 12:57:45,187 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer2.prob, batch_count=656540.0, ans=0.125 2023-05-11 12:57:51,623 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.prob, batch_count=656540.0, ans=0.125 2023-05-11 12:57:53,571 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.73 vs. limit=10.0 2023-05-11 12:58:19,582 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.balancer_ff3.min_abs, batch_count=656640.0, ans=0.2 2023-05-11 12:58:20,711 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0 from training. Duration: 25.775 2023-05-11 12:58:29,935 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=8.14 vs. 
limit=22.5 2023-05-11 12:58:36,670 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.attention_skip_rate, batch_count=656740.0, ans=0.0 2023-05-11 12:58:37,779 INFO [train.py:1021] (0/2) Epoch 37, batch 450, loss[loss=0.1566, simple_loss=0.2502, pruned_loss=0.0315, over 37166.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2549, pruned_loss=0.03417, over 6491388.61 frames. ], batch size: 98, lr: 2.97e-03, grad_scale: 32.0 2023-05-11 12:58:42,149 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0_sp0.9 from training. Duration: 22.25 2023-05-11 12:59:09,802 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0 from training. Duration: 26.205 2023-05-11 12:59:26,987 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp0.9 from training. Duration: 30.1555625 2023-05-11 12:59:31,363 WARNING [train.py:1182] (0/2) Exclude cut with ID 1265-135635-0050-6781-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 12:59:34,951 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=8.49 vs. limit=15.0 2023-05-11 12:59:36,064 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer2.prob, batch_count=656940.0, ans=0.125 2023-05-11 12:59:41,429 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp1.1 from training. Duration: 20.6545625 2023-05-11 12:59:46,486 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff2_skip_rate, batch_count=656940.0, ans=0.0 2023-05-11 12:59:51,910 INFO [train.py:1021] (0/2) Epoch 37, batch 500, loss[loss=0.1569, simple_loss=0.2555, pruned_loss=0.02914, over 37079.00 frames. ], tot_loss[loss=0.1617, simple_loss=0.2552, pruned_loss=0.03409, over 6672440.40 frames. ], batch size: 110, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 12:59:55,249 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=656990.0, ans=0.125 2023-05-11 13:00:12,396 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.617e+02 3.110e+02 3.749e+02 4.645e+02 6.619e+02, threshold=7.499e+02, percent-clipped=0.0 2023-05-11 13:00:17,100 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=657040.0, ans=0.125 2023-05-11 13:00:24,164 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0045-39920-0_sp0.9 from training. Duration: 20.52225 2023-05-11 13:00:39,735 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.prob, batch_count=657140.0, ans=0.125 2023-05-11 13:00:43,804 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp0.9 from training. Duration: 29.1166875 2023-05-11 13:00:54,162 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=657190.0, ans=0.1 2023-05-11 13:01:05,353 INFO [train.py:1021] (0/2) Epoch 37, batch 550, loss[loss=0.1468, simple_loss=0.2388, pruned_loss=0.02745, over 36870.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2564, pruned_loss=0.03454, over 6779740.17 frames. ], batch size: 96, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 13:01:37,860 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=12.25 vs. 
limit=22.5 2023-05-11 13:01:44,211 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133211-0007-59831-0_sp0.9 from training. Duration: 21.388875 2023-05-11 13:02:00,613 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten1.whitening_limit, batch_count=657390.0, ans=10.0 2023-05-11 13:02:17,703 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0 from training. Duration: 22.72 2023-05-11 13:02:19,068 INFO [train.py:1021] (0/2) Epoch 37, batch 600, loss[loss=0.1621, simple_loss=0.255, pruned_loss=0.03462, over 34923.00 frames. ], tot_loss[loss=0.1629, simple_loss=0.2564, pruned_loss=0.03468, over 6890505.61 frames. ], batch size: 145, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 13:02:19,182 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0_sp0.9 from training. Duration: 22.7444375 2023-05-11 13:02:37,947 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=657540.0, ans=0.1 2023-05-11 13:02:40,605 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.695e+02 3.135e+02 3.598e+02 4.504e+02 7.534e+02, threshold=7.197e+02, percent-clipped=1.0 2023-05-11 13:02:59,406 WARNING [train.py:1182] (0/2) Exclude cut with ID 4133-6541-0027-40495-0_sp1.1 from training. Duration: 0.9681875 2023-05-11 13:03:03,645 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0_sp0.9 from training. Duration: 22.3166875 2023-05-11 13:03:08,599 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133212-0015-59917-0_sp0.9 from training. Duration: 21.8166875 2023-05-11 13:03:16,555 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=657640.0, ans=0.07 2023-05-11 13:03:20,756 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=657690.0, ans=0.2 2023-05-11 13:03:33,543 INFO [train.py:1021] (0/2) Epoch 37, batch 650, loss[loss=0.1577, simple_loss=0.2539, pruned_loss=0.03074, over 37036.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2562, pruned_loss=0.03469, over 6978870.80 frames. ], batch size: 99, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 13:03:33,951 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 13:03:54,095 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=657790.0, ans=0.0 2023-05-11 13:03:56,140 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=4.19 vs. limit=15.0 2023-05-11 13:04:07,905 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys.whitening_limit, batch_count=657840.0, ans=6.0 2023-05-11 13:04:10,774 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=657840.0, ans=0.125 2023-05-11 13:04:14,520 INFO [scaling.py:969] (0/2) Whitening: name=encoder_embed.convnext.out_whiten, num_groups=1, num_channels=128, metric=4.79 vs. limit=5.0 2023-05-11 13:04:47,493 INFO [train.py:1021] (0/2) Epoch 37, batch 700, loss[loss=0.1483, simple_loss=0.2387, pruned_loss=0.02898, over 36985.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2561, pruned_loss=0.03472, over 7036637.68 frames. 
], batch size: 91, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 13:04:49,017 WARNING [train.py:1182] (0/2) Exclude cut with ID 4957-30119-0041-23990-0_sp0.9 from training. Duration: 20.22775 2023-05-11 13:04:53,343 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=657990.0, ans=0.1 2023-05-11 13:04:53,460 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=657990.0, ans=0.125 2023-05-11 13:05:07,966 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.565e+02 3.043e+02 3.358e+02 3.995e+02 8.060e+02, threshold=6.715e+02, percent-clipped=3.0 2023-05-11 13:05:27,240 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.min_positive, batch_count=658090.0, ans=0.025 2023-05-11 13:05:27,308 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=658090.0, ans=0.0 2023-05-11 13:05:32,720 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp1.1 from training. Duration: 24.67275 2023-05-11 13:05:40,878 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=658140.0, ans=0.0 2023-05-11 13:05:43,867 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.scale_min, batch_count=658140.0, ans=0.2 2023-05-11 13:05:45,716 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=5.16 vs. limit=15.0 2023-05-11 13:06:01,376 INFO [train.py:1021] (0/2) Epoch 37, batch 750, loss[loss=0.1554, simple_loss=0.2487, pruned_loss=0.03104, over 37151.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2556, pruned_loss=0.03472, over 7081530.59 frames. ], batch size: 102, lr: 2.97e-03, grad_scale: 16.0 2023-05-11 13:06:01,755 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=658240.0, ans=0.125 2023-05-11 13:06:02,942 WARNING [train.py:1182] (0/2) Exclude cut with ID 3082-165428-0081-50734-0_sp0.9 from training. Duration: 21.8055625 2023-05-11 13:06:24,595 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=658290.0, ans=0.0 2023-05-11 13:06:40,779 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0_sp0.9 from training. Duration: 22.6666875 2023-05-11 13:06:48,757 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=658390.0, ans=0.125 2023-05-11 13:07:14,404 INFO [train.py:1021] (0/2) Epoch 37, batch 800, loss[loss=0.1517, simple_loss=0.2444, pruned_loss=0.02953, over 37023.00 frames. ], tot_loss[loss=0.1627, simple_loss=0.2559, pruned_loss=0.03478, over 7119369.47 frames. ], batch size: 99, lr: 2.97e-03, grad_scale: 32.0 2023-05-11 13:07:29,901 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.prob, batch_count=658540.0, ans=0.125 2023-05-11 13:07:35,322 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.469e+02 3.095e+02 3.487e+02 4.291e+02 7.512e+02, threshold=6.975e+02, percent-clipped=3.0 2023-05-11 13:07:45,915 WARNING [train.py:1182] (0/2) Exclude cut with ID 2411-132532-0017-82279-0_sp1.1 from training. 
Duration: 0.9681875 2023-05-11 13:08:12,146 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0 from training. Duration: 22.485 2023-05-11 13:08:21,510 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.1482, 5.3299, 5.4041, 6.0142], device='cuda:0') 2023-05-11 13:08:28,418 INFO [train.py:1021] (0/2) Epoch 37, batch 850, loss[loss=0.1652, simple_loss=0.2636, pruned_loss=0.03334, over 36830.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2556, pruned_loss=0.03478, over 7119898.97 frames. ], batch size: 113, lr: 2.97e-03, grad_scale: 32.0 2023-05-11 13:08:50,697 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp1.1 from training. Duration: 23.82275 2023-05-11 13:09:02,482 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=658840.0, ans=0.0 2023-05-11 13:09:03,659 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0 from training. Duration: 20.77 2023-05-11 13:09:11,236 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0_sp0.9 from training. Duration: 24.088875 2023-05-11 13:09:42,375 INFO [train.py:1021] (0/2) Epoch 37, batch 900, loss[loss=0.1426, simple_loss=0.2264, pruned_loss=0.02945, over 36993.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2555, pruned_loss=0.0347, over 7145744.24 frames. ], batch size: 91, lr: 2.97e-03, grad_scale: 32.0 2023-05-11 13:09:44,077 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp1.1 from training. Duration: 20.4409375 2023-05-11 13:09:45,760 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3404, 3.4699, 3.7508, 3.4624], device='cuda:0') 2023-05-11 13:09:50,106 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=658990.0, ans=0.125 2023-05-11 13:09:54,612 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten2.whitening_limit, batch_count=658990.0, ans=15.0 2023-05-11 13:10:00,064 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.0124, 4.1650, 4.5605, 4.6007], device='cuda:0') 2023-05-11 13:10:03,193 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.486e+02 3.187e+02 3.582e+02 4.207e+02 7.422e+02, threshold=7.164e+02, percent-clipped=2.0 2023-05-11 13:10:13,860 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=659090.0, ans=0.2 2023-05-11 13:10:24,495 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=659090.0, ans=0.1 2023-05-11 13:10:24,768 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=7.72 vs. limit=15.0 2023-05-11 13:10:35,368 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=3.26 vs. 
limit=15.0 2023-05-11 13:10:44,835 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.convnext.hidden_balancer.prob, batch_count=659190.0, ans=0.125 2023-05-11 13:10:51,312 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=10.11 vs. limit=15.0 2023-05-11 13:10:56,761 INFO [train.py:1021] (0/2) Epoch 37, batch 950, loss[loss=0.1587, simple_loss=0.253, pruned_loss=0.03219, over 37127.00 frames. ], tot_loss[loss=0.1629, simple_loss=0.256, pruned_loss=0.03488, over 7137735.11 frames. ], batch size: 98, lr: 2.97e-03, grad_scale: 32.0 2023-05-11 13:10:58,265 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0_sp0.9 from training. Duration: 22.511125 2023-05-11 13:10:58,290 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0 from training. Duration: 20.675 2023-05-11 13:11:01,427 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=659240.0, ans=0.0 2023-05-11 13:11:16,284 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=659290.0, ans=0.125 2023-05-11 13:11:24,949 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer2.prob, batch_count=659340.0, ans=0.125 2023-05-11 13:11:44,462 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer2.prob, batch_count=659390.0, ans=0.125 2023-05-11 13:11:48,762 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=659390.0, ans=0.04949747468305833 2023-05-11 13:12:10,916 INFO [train.py:1021] (0/2) Epoch 37, batch 1000, loss[loss=0.169, simple_loss=0.2683, pruned_loss=0.03483, over 37088.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2569, pruned_loss=0.03488, over 7177960.19 frames. ], batch size: 110, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:12:28,944 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=659540.0, ans=0.125 2023-05-11 13:12:28,983 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.const_attention_rate, batch_count=659540.0, ans=0.025 2023-05-11 13:12:31,680 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.554e+02 2.959e+02 3.281e+02 3.752e+02 6.269e+02, threshold=6.562e+02, percent-clipped=0.0 2023-05-11 13:12:33,458 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=659540.0, ans=0.125 2023-05-11 13:12:41,190 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=659590.0, ans=0.0 2023-05-11 13:12:42,523 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp0.9 from training. Duration: 24.9833125 2023-05-11 13:13:13,144 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0 from training. Duration: 27.14 2023-05-11 13:13:18,012 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer2.prob, batch_count=659690.0, ans=0.125 2023-05-11 13:13:24,329 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.conv_module2.whiten, num_groups=1, num_channels=192, metric=7.05 vs. 
limit=15.0 2023-05-11 13:13:24,803 INFO [train.py:1021] (0/2) Epoch 37, batch 1050, loss[loss=0.1603, simple_loss=0.2557, pruned_loss=0.03243, over 36907.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2569, pruned_loss=0.03477, over 7184858.90 frames. ], batch size: 105, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:13:29,425 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0 from training. Duration: 22.44 2023-05-11 13:13:40,665 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=4.04 vs. limit=15.0 2023-05-11 13:13:40,750 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=4.98 vs. limit=15.0 2023-05-11 13:13:50,517 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=659790.0, ans=0.1 2023-05-11 13:14:12,724 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.min_positive, batch_count=659890.0, ans=0.05 2023-05-11 13:14:39,365 INFO [train.py:1021] (0/2) Epoch 37, batch 1100, loss[loss=0.181, simple_loss=0.274, pruned_loss=0.04403, over 36375.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2563, pruned_loss=0.03462, over 7186605.08 frames. ], batch size: 126, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:14:40,901 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/checkpoint-132000.pt 2023-05-11 13:14:42,338 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3532, 4.3191, 3.9907, 4.3235, 3.7311, 3.3123, 3.6896, 3.3342], device='cuda:0') 2023-05-11 13:14:52,821 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0060-62364-0_sp0.9 from training. Duration: 21.361125 2023-05-11 13:14:58,725 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp1.1 from training. Duration: 27.0318125 2023-05-11 13:15:01,496 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.595e+02 3.110e+02 3.594e+02 4.455e+02 8.381e+02, threshold=7.188e+02, percent-clipped=3.0 2023-05-11 13:15:08,918 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp0.9 from training. Duration: 28.638875 2023-05-11 13:15:25,410 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0 from training. Duration: 20.4 2023-05-11 13:15:35,973 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer1.prob, batch_count=660140.0, ans=0.125 2023-05-11 13:15:41,523 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=660190.0, ans=0.125 2023-05-11 13:15:54,871 INFO [train.py:1021] (0/2) Epoch 37, batch 1150, loss[loss=0.1735, simple_loss=0.2673, pruned_loss=0.03979, over 32078.00 frames. ], tot_loss[loss=0.1621, simple_loss=0.2554, pruned_loss=0.03447, over 7188991.31 frames. ], batch size: 170, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:15:55,198 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.prob, batch_count=660240.0, ans=0.125 2023-05-11 13:15:59,274 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0 from training. 
Duration: 20.025 2023-05-11 13:15:59,293 WARNING [train.py:1182] (0/2) Exclude cut with ID 2364-131735-0112-64612-0_sp0.9 from training. Duration: 20.488875 2023-05-11 13:16:05,053 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0 from training. Duration: 29.735 2023-05-11 13:16:28,706 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=4.93 vs. limit=12.0 2023-05-11 13:16:33,861 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=660340.0, ans=0.1 2023-05-11 13:16:38,189 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.skip_rate, batch_count=660390.0, ans=0.04949747468305833 2023-05-11 13:17:04,796 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=660440.0, ans=0.07 2023-05-11 13:17:09,202 INFO [train.py:1021] (0/2) Epoch 37, batch 1200, loss[loss=0.1628, simple_loss=0.2546, pruned_loss=0.03554, over 36904.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2558, pruned_loss=0.03463, over 7183022.68 frames. ], batch size: 100, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:17:21,572 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=3.48 vs. limit=12.0 2023-05-11 13:17:26,790 WARNING [train.py:1182] (0/2) Exclude cut with ID 7276-92427-0014-12983-0_sp0.9 from training. Duration: 21.3055625 2023-05-11 13:17:28,248 WARNING [train.py:1182] (0/2) Exclude cut with ID 1025-75365-0008-79168-0_sp0.9 from training. Duration: 22.0666875 2023-05-11 13:17:29,555 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.307e+02 3.100e+02 3.557e+02 4.401e+02 8.653e+02, threshold=7.113e+02, percent-clipped=1.0 2023-05-11 13:17:30,295 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.77 vs. limit=6.0 2023-05-11 13:17:57,891 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=660640.0, ans=0.2 2023-05-11 13:17:57,997 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([3.7384, 3.1925, 2.3138, 2.5970], device='cuda:0') 2023-05-11 13:18:01,770 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.57 vs. limit=22.5 2023-05-11 13:18:04,197 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.const_attention_rate, batch_count=660640.0, ans=0.025 2023-05-11 13:18:22,927 INFO [train.py:1021] (0/2) Epoch 37, batch 1250, loss[loss=0.1402, simple_loss=0.2316, pruned_loss=0.02439, over 36774.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.2555, pruned_loss=0.03447, over 7179635.91 frames. ], batch size: 89, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:18:23,242 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=660740.0, ans=0.2 2023-05-11 13:19:17,197 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0 from training. 
Duration: 20.26 2023-05-11 13:19:24,593 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=660940.0, ans=0.1 2023-05-11 13:19:29,947 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0030-9324-0_sp0.9 from training. Duration: 21.3444375 2023-05-11 13:19:37,271 INFO [train.py:1021] (0/2) Epoch 37, batch 1300, loss[loss=0.1949, simple_loss=0.2794, pruned_loss=0.0552, over 23838.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2568, pruned_loss=0.03482, over 7141001.16 frames. ], batch size: 234, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:19:50,346 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=10.38 vs. limit=22.5 2023-05-11 13:19:58,227 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.467e+02 2.904e+02 3.298e+02 3.791e+02 6.244e+02, threshold=6.596e+02, percent-clipped=0.0 2023-05-11 13:20:24,571 WARNING [train.py:1182] (0/2) Exclude cut with ID 497-129325-0061-62254-0_sp1.1 from training. Duration: 0.97725 2023-05-11 13:20:42,885 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.80 vs. limit=15.0 2023-05-11 13:20:45,517 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff3_skip_rate, batch_count=661190.0, ans=0.0 2023-05-11 13:20:51,005 INFO [train.py:1021] (0/2) Epoch 37, batch 1350, loss[loss=0.1684, simple_loss=0.2617, pruned_loss=0.03754, over 36908.00 frames. ], tot_loss[loss=0.1629, simple_loss=0.2564, pruned_loss=0.03472, over 7146620.44 frames. ], batch size: 100, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:21:07,598 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=661290.0, ans=0.07 2023-05-11 13:21:10,250 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0_sp0.9 from training. Duration: 22.97225 2023-05-11 13:21:12,068 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=661290.0, ans=0.0 2023-05-11 13:21:13,365 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.const_attention_rate, batch_count=661290.0, ans=0.025 2023-05-11 13:21:25,067 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=661340.0, ans=0.125 2023-05-11 13:21:38,477 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0047-39922-0_sp0.9 from training. Duration: 21.97775 2023-05-11 13:21:52,803 WARNING [train.py:1182] (0/2) Exclude cut with ID 1112-1043-0006-89194-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 13:21:59,097 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=661440.0, ans=0.125 2023-05-11 13:22:04,816 INFO [train.py:1021] (0/2) Epoch 37, batch 1400, loss[loss=0.1998, simple_loss=0.2826, pruned_loss=0.05847, over 24655.00 frames. ], tot_loss[loss=0.1634, simple_loss=0.2567, pruned_loss=0.03507, over 7131030.71 frames. ], batch size: 233, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:22:04,874 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0 from training. 
Duration: 20.47 2023-05-11 13:22:15,148 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=661490.0, ans=0.1 2023-05-11 13:22:25,784 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.493e+02 3.031e+02 3.376e+02 4.076e+02 7.513e+02, threshold=6.752e+02, percent-clipped=1.0 2023-05-11 13:22:36,572 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 13:22:47,951 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.prob, batch_count=661640.0, ans=0.125 2023-05-11 13:23:13,053 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0037-39912-0_sp0.9 from training. Duration: 20.67225 2023-05-11 13:23:19,183 INFO [train.py:1021] (0/2) Epoch 37, batch 1450, loss[loss=0.1669, simple_loss=0.2629, pruned_loss=0.03542, over 36753.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2561, pruned_loss=0.03475, over 7157571.14 frames. ], batch size: 122, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:23:19,445 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff2_skip_rate, batch_count=661740.0, ans=0.0 2023-05-11 13:23:32,527 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.9224, 2.6166, 4.2913, 2.8340], device='cuda:0') 2023-05-11 13:23:33,589 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp0.9 from training. Duration: 25.2444375 2023-05-11 13:23:50,717 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys.whitening_limit, batch_count=661840.0, ans=6.0 2023-05-11 13:23:57,205 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0021-76797-0_sp0.9 from training. Duration: 21.1445 2023-05-11 13:24:09,529 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward3.hidden_balancer.prob, batch_count=661890.0, ans=0.125 2023-05-11 13:24:32,561 INFO [train.py:1021] (0/2) Epoch 37, batch 1500, loss[loss=0.1671, simple_loss=0.268, pruned_loss=0.03305, over 36868.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.256, pruned_loss=0.03476, over 7143543.92 frames. ], batch size: 111, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:24:52,636 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.99 vs. limit=10.0 2023-05-11 13:24:53,277 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.571e+02 3.074e+02 3.373e+02 4.017e+02 6.234e+02, threshold=6.746e+02, percent-clipped=0.0 2023-05-11 13:25:12,627 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp0.9 from training. Duration: 33.038875 2023-05-11 13:25:46,281 INFO [train.py:1021] (0/2) Epoch 37, batch 1550, loss[loss=0.1617, simple_loss=0.2598, pruned_loss=0.03178, over 37131.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2557, pruned_loss=0.03448, over 7176931.96 frames. ], batch size: 107, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:25:46,580 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=662240.0, ans=0.125 2023-05-11 13:25:51,987 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64291-0000-16059-0_sp0.9 from training. 
Duration: 20.0944375 2023-05-11 13:25:59,939 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer1.prob, batch_count=662290.0, ans=0.125 2023-05-11 13:26:08,199 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=144, metric=9.03 vs. limit=10.0 2023-05-11 13:26:08,732 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp1.1 from training. Duration: 20.4 2023-05-11 13:26:16,229 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=662340.0, ans=0.0 2023-05-11 13:26:17,347 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0 from training. Duration: 20.085 2023-05-11 13:26:29,487 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0_sp0.9 from training. Duration: 23.07775 2023-05-11 13:27:00,323 INFO [train.py:1021] (0/2) Epoch 37, batch 1600, loss[loss=0.1753, simple_loss=0.2653, pruned_loss=0.04268, over 37024.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2558, pruned_loss=0.03443, over 7192418.80 frames. ], batch size: 116, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:27:16,461 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp0.9 from training. Duration: 24.9333125 2023-05-11 13:27:20,150 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.prob, batch_count=662540.0, ans=0.125 2023-05-11 13:27:21,296 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.478e+02 3.138e+02 3.598e+02 4.428e+02 8.077e+02, threshold=7.195e+02, percent-clipped=2.0 2023-05-11 13:27:23,211 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=662540.0, ans=0.07 2023-05-11 13:28:02,651 WARNING [train.py:1182] (0/2) Exclude cut with ID 5118-111612-0016-124680-0_sp0.9 from training. Duration: 20.388875 2023-05-11 13:28:08,477 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp1.1 from training. Duration: 20.3590625 2023-05-11 13:28:14,941 INFO [train.py:1021] (0/2) Epoch 37, batch 1650, loss[loss=0.1595, simple_loss=0.2505, pruned_loss=0.03427, over 37177.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2561, pruned_loss=0.03447, over 7188271.27 frames. ], batch size: 93, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:28:48,970 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=662840.0, ans=0.1 2023-05-11 13:29:03,259 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer2.min_abs, batch_count=662890.0, ans=0.5 2023-05-11 13:29:10,866 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.min_positive, batch_count=662890.0, ans=0.05 2023-05-11 13:29:20,741 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0_sp1.1 from training. Duration: 0.836375 2023-05-11 13:29:27,791 INFO [train.py:1021] (0/2) Epoch 37, batch 1700, loss[loss=0.1573, simple_loss=0.2522, pruned_loss=0.03121, over 37146.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2557, pruned_loss=0.03467, over 7195710.55 frames. 
], batch size: 102, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:29:48,836 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.610e+02 3.316e+02 3.818e+02 4.780e+02 7.668e+02, threshold=7.635e+02, percent-clipped=1.0 2023-05-11 13:29:56,642 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.5168, 3.5972, 4.0047, 3.5204], device='cuda:0') 2023-05-11 13:30:04,078 WARNING [train.py:1182] (0/2) Exclude cut with ID 8565-290391-0049-67394-0_sp0.9 from training. Duration: 21.3166875 2023-05-11 13:30:04,285 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=663090.0, ans=0.125 2023-05-11 13:30:12,914 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=663140.0, ans=0.0 2023-05-11 13:30:34,969 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0029-104863-0_sp0.9 from training. Duration: 22.1055625 2023-05-11 13:30:42,167 INFO [train.py:1021] (0/2) Epoch 37, batch 1750, loss[loss=0.1858, simple_loss=0.2771, pruned_loss=0.04725, over 35854.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2563, pruned_loss=0.03566, over 7172007.76 frames. ], batch size: 133, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:30:46,610 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp1.1 from training. Duration: 21.77725 2023-05-11 13:31:05,898 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp0.9 from training. Duration: 27.8166875 2023-05-11 13:31:20,363 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=8.82 vs. limit=22.5 2023-05-11 13:31:27,401 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=663390.0, ans=0.0 2023-05-11 13:31:27,487 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer1.prob, batch_count=663390.0, ans=0.125 2023-05-11 13:31:30,275 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward2.hidden_balancer.prob, batch_count=663390.0, ans=0.125 2023-05-11 13:31:31,493 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp1.1 from training. Duration: 22.5090625 2023-05-11 13:31:37,383 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0 from training. Duration: 25.035 2023-05-11 13:31:39,822 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=5.19 vs. limit=15.0 2023-05-11 13:31:43,979 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.skip_rate, batch_count=663440.0, ans=0.04949747468305833 2023-05-11 13:31:54,077 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer2.prob, batch_count=663440.0, ans=0.125 2023-05-11 13:31:54,230 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.5605, 3.5084, 3.3753, 4.1757, 2.3156, 3.6312, 4.2134, 3.5814], device='cuda:0') 2023-05-11 13:31:56,753 INFO [train.py:1021] (0/2) Epoch 37, batch 1800, loss[loss=0.1812, simple_loss=0.2722, pruned_loss=0.04512, over 37070.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2572, pruned_loss=0.03673, over 7153287.36 frames. 
], batch size: 116, lr: 2.96e-03, grad_scale: 32.0 2023-05-11 13:31:56,847 WARNING [train.py:1182] (0/2) Exclude cut with ID 774-127930-0014-10412-0_sp1.1 from training. Duration: 0.95 2023-05-11 13:32:04,578 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=663490.0, ans=0.0 2023-05-11 13:32:07,868 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.5179, 5.3580, 4.7073, 5.1466], device='cuda:0') 2023-05-11 13:32:14,791 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp0.9 from training. Duration: 0.92225 2023-05-11 13:32:17,505 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.658e+02 3.403e+02 3.881e+02 4.719e+02 7.121e+02, threshold=7.762e+02, percent-clipped=0.0 2023-05-11 13:32:42,793 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0 from training. Duration: 21.97 2023-05-11 13:32:57,665 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=8.14 vs. limit=22.5 2023-05-11 13:33:02,007 WARNING [train.py:1182] (0/2) Exclude cut with ID 7492-105653-0055-62765-0_sp0.9 from training. Duration: 21.97225 2023-05-11 13:33:03,371 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp0.9 from training. Duration: 25.3333125 2023-05-11 13:33:03,664 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=663690.0, ans=0.0 2023-05-11 13:33:09,425 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.min_positive, batch_count=663740.0, ans=0.05 2023-05-11 13:33:10,585 INFO [train.py:1021] (0/2) Epoch 37, batch 1850, loss[loss=0.1712, simple_loss=0.2639, pruned_loss=0.03921, over 37010.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2573, pruned_loss=0.03751, over 7160035.72 frames. ], batch size: 104, lr: 2.96e-03, grad_scale: 16.0 2023-05-11 13:33:15,171 WARNING [train.py:1182] (0/2) Exclude cut with ID 5172-29468-0015-19128-0_sp0.9 from training. Duration: 21.5055625 2023-05-11 13:33:16,336 INFO [scaling.py:969] (0/2) Whitening: name=encoder_embed.convnext.out_whiten, num_groups=1, num_channels=128, metric=4.45 vs. limit=5.0 2023-05-11 13:33:25,418 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp1.1 from training. Duration: 20.72725 2023-05-11 13:33:33,571 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=663790.0, ans=0.1 2023-05-11 13:33:58,375 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp0.9 from training. Duration: 26.32775 2023-05-11 13:34:00,116 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.9349, 4.0323, 4.5696, 4.7936], device='cuda:0') 2023-05-11 13:34:11,456 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.prob, batch_count=663940.0, ans=0.125 2023-05-11 13:34:24,724 INFO [train.py:1021] (0/2) Epoch 37, batch 1900, loss[loss=0.17, simple_loss=0.2572, pruned_loss=0.04137, over 36936.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2571, pruned_loss=0.03827, over 7170754.10 frames. 
], batch size: 100, lr: 2.95e-03, grad_scale: 16.0 2023-05-11 13:34:30,459 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0 from training. Duration: 20.025 2023-05-11 13:34:34,761 WARNING [train.py:1182] (0/2) Exclude cut with ID 6709-74022-0004-86860-0_sp1.1 from training. Duration: 0.9409375 2023-05-11 13:34:36,220 WARNING [train.py:1182] (0/2) Exclude cut with ID 4757-1811-0023-62229-0_sp0.9 from training. Duration: 21.37775 2023-05-11 13:34:39,279 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=664040.0, ans=0.125 2023-05-11 13:34:46,588 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.940e+02 3.429e+02 3.727e+02 4.226e+02 6.272e+02, threshold=7.454e+02, percent-clipped=0.0 2023-05-11 13:34:55,508 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0004-25974-0_sp0.9 from training. Duration: 21.17225 2023-05-11 13:34:55,518 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp0.9 from training. Duration: 27.511125 2023-05-11 13:35:16,430 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=664140.0, ans=0.0 2023-05-11 13:35:25,020 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0 from training. Duration: 22.8 2023-05-11 13:35:30,726 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0 from training. Duration: 22.585 2023-05-11 13:35:38,522 INFO [train.py:1021] (0/2) Epoch 37, batch 1950, loss[loss=0.1849, simple_loss=0.2724, pruned_loss=0.04867, over 37024.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2575, pruned_loss=0.03889, over 7139111.98 frames. ], batch size: 116, lr: 2.95e-03, grad_scale: 16.0 2023-05-11 13:35:41,725 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=664240.0, ans=0.0 2023-05-11 13:36:00,686 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0001-146967-0_sp0.9 from training. Duration: 22.0166875 2023-05-11 13:36:05,232 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.hidden_balancer.prob, batch_count=664290.0, ans=0.125 2023-05-11 13:36:05,423 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3154, 4.4018, 2.2725, 2.4472], device='cuda:0') 2023-05-11 13:36:07,349 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.57 vs. limit=6.0 2023-05-11 13:36:18,218 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp1.1 from training. Duration: 24.395375 2023-05-11 13:36:21,454 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([1.9490, 2.8968, 2.7062, 2.7437, 2.5936, 2.3745, 2.7165, 2.3518], device='cuda:0') 2023-05-11 13:36:24,065 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp0.9 from training. Duration: 27.47775 2023-05-11 13:36:27,776 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=664390.0, ans=0.1 2023-05-11 13:36:28,936 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp0.9 from training. 
Duration: 24.8833125 2023-05-11 13:36:31,765 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0 from training. Duration: 23.39 2023-05-11 13:36:37,675 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp0.9 from training. Duration: 28.72225 2023-05-11 13:36:39,351 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=664440.0, ans=0.125 2023-05-11 13:36:45,053 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=664440.0, ans=0.0 2023-05-11 13:36:47,704 WARNING [train.py:1182] (0/2) Exclude cut with ID 585-294811-0110-133686-0_sp0.9 from training. Duration: 20.8944375 2023-05-11 13:36:52,467 INFO [train.py:1021] (0/2) Epoch 37, batch 2000, loss[loss=0.1688, simple_loss=0.2595, pruned_loss=0.03908, over 36686.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2581, pruned_loss=0.03968, over 7124458.74 frames. ], batch size: 118, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:37:02,614 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0_sp0.9 from training. Duration: 23.8444375 2023-05-11 13:37:13,916 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.801e+02 3.502e+02 3.995e+02 4.633e+02 7.963e+02, threshold=7.989e+02, percent-clipped=1.0 2023-05-11 13:37:27,693 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0 from training. Duration: 25.85 2023-05-11 13:37:27,708 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0 from training. Duration: 21.39 2023-05-11 13:37:39,074 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0 from training. Duration: 27.92 2023-05-11 13:37:57,285 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=664690.0, ans=0.0 2023-05-11 13:38:04,391 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0039-130165-0_sp0.9 from training. Duration: 20.661125 2023-05-11 13:38:04,560 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=664740.0, ans=0.125 2023-05-11 13:38:05,725 INFO [train.py:1021] (0/2) Epoch 37, batch 2050, loss[loss=0.1792, simple_loss=0.2683, pruned_loss=0.04507, over 35894.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2589, pruned_loss=0.04041, over 7101030.47 frames. ], batch size: 133, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:38:18,294 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=664740.0, ans=0.0 2023-05-11 13:38:28,489 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=664790.0, ans=0.95 2023-05-11 13:38:29,651 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0043-15874-0_sp0.9 from training. Duration: 20.07225 2023-05-11 13:38:31,716 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.51 vs. limit=15.0 2023-05-11 13:38:36,005 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0 from training. 
Duration: 21.01 2023-05-11 13:38:37,648 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer2.prob, batch_count=664840.0, ans=0.125 2023-05-11 13:38:44,960 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=664840.0, ans=0.125 2023-05-11 13:38:47,723 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=664840.0, ans=0.2 2023-05-11 13:38:56,821 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=14.96 vs. limit=15.0 2023-05-11 13:39:14,478 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=664940.0, ans=0.0 2023-05-11 13:39:17,893 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=7.90 vs. limit=22.5 2023-05-11 13:39:19,941 INFO [train.py:1021] (0/2) Epoch 37, batch 2100, loss[loss=0.1551, simple_loss=0.2403, pruned_loss=0.03499, over 37085.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2577, pruned_loss=0.0405, over 7096379.87 frames. ], batch size: 94, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:39:20,270 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.2817, 4.6089, 4.7891, 4.5046], device='cuda:0') 2023-05-11 13:39:42,293 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.750e+02 3.625e+02 4.259e+02 5.283e+02 7.932e+02, threshold=8.518e+02, percent-clipped=0.0 2023-05-11 13:39:46,712 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0 from training. Duration: 20.65 2023-05-11 13:39:53,117 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0 from training. Duration: 21.46 2023-05-11 13:39:57,630 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=665090.0, ans=0.125 2023-05-11 13:40:00,444 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=665090.0, ans=0.2 2023-05-11 13:40:24,161 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.9128, 3.9987, 4.4766, 4.5427], device='cuda:0') 2023-05-11 13:40:29,834 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=665190.0, ans=0.0 2023-05-11 13:40:33,946 INFO [train.py:1021] (0/2) Epoch 37, batch 2150, loss[loss=0.1618, simple_loss=0.2451, pruned_loss=0.03925, over 36943.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2572, pruned_loss=0.04049, over 7137217.92 frames. ], batch size: 95, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:40:37,540 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0 from training. Duration: 0.92 2023-05-11 13:40:44,696 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0_sp0.9 from training. Duration: 23.7666875 2023-05-11 13:41:21,323 WARNING [train.py:1182] (0/2) Exclude cut with ID 8544-281189-0060-101339-0_sp0.9 from training. 
Duration: 20.861125 2023-05-11 13:41:30,880 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=665390.0, ans=0.2 2023-05-11 13:41:32,002 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0_sp0.9 from training. Duration: 22.711125 2023-05-11 13:41:47,909 INFO [train.py:1021] (0/2) Epoch 37, batch 2200, loss[loss=0.162, simple_loss=0.2458, pruned_loss=0.03905, over 36966.00 frames. ], tot_loss[loss=0.169, simple_loss=0.257, pruned_loss=0.04051, over 7146136.92 frames. ], batch size: 95, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:42:07,609 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=665540.0, ans=0.0 2023-05-11 13:42:10,277 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.859e+02 3.470e+02 3.932e+02 4.513e+02 7.940e+02, threshold=7.864e+02, percent-clipped=0.0 2023-05-11 13:42:16,083 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp1.1 from training. Duration: 22.986375 2023-05-11 13:42:30,730 WARNING [train.py:1182] (0/2) Exclude cut with ID 8040-260924-0003-80960-0_sp0.9 from training. Duration: 22.07225 2023-05-11 13:42:35,261 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=665640.0, ans=0.125 2023-05-11 13:42:36,459 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0045-26330-0_sp0.9 from training. Duration: 20.3055625 2023-05-11 13:42:39,569 WARNING [train.py:1182] (0/2) Exclude cut with ID 6356-271890-0060-94317-0_sp0.9 from training. Duration: 20.72225 2023-05-11 13:42:48,586 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=665690.0, ans=0.125 2023-05-11 13:42:51,233 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.const_attention_rate, batch_count=665690.0, ans=0.025 2023-05-11 13:42:55,931 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten.whitening_limit, batch_count=665690.0, ans=15.0 2023-05-11 13:42:58,749 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp1.1 from training. Duration: 22.4818125 2023-05-11 13:42:58,991 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.1707, 5.3690, 5.4598, 6.0282], device='cuda:0') 2023-05-11 13:43:00,393 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=665740.0, ans=0.125 2023-05-11 13:43:01,663 INFO [train.py:1021] (0/2) Epoch 37, batch 2250, loss[loss=0.1794, simple_loss=0.2718, pruned_loss=0.04353, over 32649.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2562, pruned_loss=0.0402, over 7161222.68 frames. ], batch size: 170, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:43:04,987 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.6039, 3.3983, 3.1541, 3.9845, 2.4683, 3.4468, 4.0243, 3.4854], device='cuda:0') 2023-05-11 13:43:24,057 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp0.9 from training. Duration: 25.0944375 2023-05-11 13:43:27,060 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0 from training. 
Duration: 21.515 2023-05-11 13:43:32,859 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp0.9 from training. Duration: 27.02225 2023-05-11 13:43:38,769 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0010-62480-0_sp0.9 from training. Duration: 22.22225 2023-05-11 13:43:46,048 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0085-44554-0_sp0.9 from training. Duration: 20.85 2023-05-11 13:43:49,649 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=665890.0, ans=0.2 2023-05-11 13:43:59,725 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer1.prob, batch_count=665940.0, ans=0.125 2023-05-11 13:44:15,963 INFO [train.py:1021] (0/2) Epoch 37, batch 2300, loss[loss=0.1604, simple_loss=0.245, pruned_loss=0.03792, over 36935.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2568, pruned_loss=0.04038, over 7175939.14 frames. ], batch size: 95, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:44:17,647 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.convnext.layerdrop_rate, batch_count=665990.0, ans=0.015 2023-05-11 13:44:17,813 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=665990.0, ans=0.0 2023-05-11 13:44:19,077 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0 from training. Duration: 21.54 2023-05-11 13:44:19,741 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=2.97 vs. limit=15.0 2023-05-11 13:44:23,290 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp1.1 from training. Duration: 20.5318125 2023-05-11 13:44:33,502 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0012-134311-0_sp0.9 from training. Duration: 21.9333125 2023-05-11 13:44:37,621 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.919e+02 3.442e+02 3.770e+02 4.439e+02 6.300e+02, threshold=7.540e+02, percent-clipped=0.0 2023-05-11 13:44:48,950 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer2.prob, batch_count=666090.0, ans=0.125 2023-05-11 13:45:22,774 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0025-130151-0_sp0.9 from training. Duration: 21.7944375 2023-05-11 13:45:22,993 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=666190.0, ans=0.125 2023-05-11 13:45:30,055 INFO [train.py:1021] (0/2) Epoch 37, batch 2350, loss[loss=0.1574, simple_loss=0.2447, pruned_loss=0.0351, over 36974.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2566, pruned_loss=0.04044, over 7167834.69 frames. ], batch size: 95, lr: 2.95e-03, grad_scale: 16.0 2023-05-11 13:45:35,073 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0_sp0.9 from training. Duration: 22.4666875 2023-05-11 13:45:42,216 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0 from training. Duration: 21.635 2023-05-11 13:45:48,578 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0_sp0.9 from training. 
Duration: 24.038875 2023-05-11 13:46:10,303 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=666340.0, ans=0.1 2023-05-11 13:46:17,586 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=666390.0, ans=0.0 2023-05-11 13:46:18,928 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=666390.0, ans=0.125 2023-05-11 13:46:32,509 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp1.1 from training. Duration: 21.786375 2023-05-11 13:46:40,827 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 13:46:44,721 INFO [train.py:1021] (0/2) Epoch 37, batch 2400, loss[loss=0.1983, simple_loss=0.2762, pruned_loss=0.06023, over 23697.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.256, pruned_loss=0.04031, over 7155330.56 frames. ], batch size: 233, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:46:44,813 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0 from training. Duration: 20.22 2023-05-11 13:46:59,637 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.4686, 4.7093, 2.3878, 2.5999], device='cuda:0') 2023-05-11 13:47:07,309 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=7.93 vs. limit=22.5 2023-05-11 13:47:08,040 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.948e+02 3.407e+02 3.753e+02 4.265e+02 6.174e+02, threshold=7.506e+02, percent-clipped=0.0 2023-05-11 13:47:37,750 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.3416, 5.6807, 5.4505, 6.0998], device='cuda:0') 2023-05-11 13:47:57,788 INFO [train.py:1021] (0/2) Epoch 37, batch 2450, loss[loss=0.1702, simple_loss=0.2618, pruned_loss=0.03925, over 37098.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2559, pruned_loss=0.04042, over 7140974.26 frames. ], batch size: 103, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:48:29,563 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff3_skip_rate, batch_count=666840.0, ans=0.0 2023-05-11 13:48:43,630 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0 from training. Duration: 25.285 2023-05-11 13:49:03,275 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.prob, batch_count=666940.0, ans=0.125 2023-05-11 13:49:08,664 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=9.67 vs. limit=22.5 2023-05-11 13:49:12,279 INFO [train.py:1021] (0/2) Epoch 37, batch 2500, loss[loss=0.1527, simple_loss=0.2336, pruned_loss=0.03594, over 37012.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2557, pruned_loss=0.04025, over 7139300.39 frames. 
], batch size: 86, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:49:34,470 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer2.prob, batch_count=667040.0, ans=0.125 2023-05-11 13:49:35,451 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.890e+02 3.356e+02 3.767e+02 4.440e+02 6.517e+02, threshold=7.534e+02, percent-clipped=0.0 2023-05-11 13:49:41,508 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=667090.0, ans=0.125 2023-05-11 13:49:47,382 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=667090.0, ans=0.125 2023-05-11 13:49:52,404 WARNING [train.py:1182] (0/2) Exclude cut with ID 811-130148-0001-63453-0_sp0.9 from training. Duration: 20.861125 2023-05-11 13:50:00,293 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=667140.0, ans=0.0 2023-05-11 13:50:11,564 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0 from training. Duration: 20.88 2023-05-11 13:50:25,892 INFO [train.py:1021] (0/2) Epoch 37, batch 2550, loss[loss=0.1517, simple_loss=0.2358, pruned_loss=0.03382, over 37092.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2558, pruned_loss=0.0402, over 7149290.40 frames. ], batch size: 94, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:50:39,798 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.88 vs. limit=10.0 2023-05-11 13:50:43,944 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0_sp0.9 from training. Duration: 23.4166875 2023-05-11 13:51:06,330 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=667340.0, ans=0.2 2023-05-11 13:51:40,098 INFO [train.py:1021] (0/2) Epoch 37, batch 2600, loss[loss=0.1595, simple_loss=0.2397, pruned_loss=0.03962, over 36185.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2567, pruned_loss=0.04052, over 7134327.16 frames. ], batch size: 80, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:51:51,909 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.const_attention_rate, batch_count=667490.0, ans=0.025 2023-05-11 13:52:00,253 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0 from training. Duration: 21.24 2023-05-11 13:52:00,274 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0_sp0.9 from training. Duration: 23.9055625 2023-05-11 13:52:03,156 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.894e+02 3.372e+02 3.623e+02 4.165e+02 6.432e+02, threshold=7.246e+02, percent-clipped=0.0 2023-05-11 13:52:09,335 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=667590.0, ans=0.0 2023-05-11 13:52:27,734 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.8405, 5.0159, 5.1762, 5.6988], device='cuda:0') 2023-05-11 13:52:34,693 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp0.9 from training. Duration: 25.988875 2023-05-11 13:52:43,204 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0001-134300-0_sp0.9 from training. 
Duration: 20.67225 2023-05-11 13:52:53,158 INFO [train.py:1021] (0/2) Epoch 37, batch 2650, loss[loss=0.1719, simple_loss=0.2621, pruned_loss=0.04078, over 37076.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2574, pruned_loss=0.04073, over 7134492.22 frames. ], batch size: 103, lr: 2.95e-03, grad_scale: 32.0 2023-05-11 13:52:53,468 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.prob, batch_count=667740.0, ans=0.125 2023-05-11 13:52:59,247 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=667740.0, ans=0.1 2023-05-11 13:53:03,591 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.max_abs, batch_count=667740.0, ans=10.0 2023-05-11 13:53:19,614 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=667790.0, ans=0.0 2023-05-11 13:53:20,993 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff2_skip_rate, batch_count=667790.0, ans=0.0 2023-05-11 13:53:24,625 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=14.20 vs. limit=22.5 2023-05-11 13:53:29,704 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.prob, batch_count=667840.0, ans=0.125 2023-05-11 13:53:30,879 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0 from training. Duration: 20.34 2023-05-11 13:53:35,498 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.scale_min, batch_count=667840.0, ans=0.2 2023-05-11 13:53:41,388 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.8808, 3.7069, 4.2535, 4.3015], device='cuda:0') 2023-05-11 13:53:44,319 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.7569, 3.1394, 3.3731, 3.3533], device='cuda:0') 2023-05-11 13:54:05,853 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.2874, 4.1341, 3.8763, 4.1390, 3.4875, 3.2173, 3.5900, 3.1535], device='cuda:0') 2023-05-11 13:54:07,469 INFO [train.py:1021] (0/2) Epoch 37, batch 2700, loss[loss=0.1668, simple_loss=0.2516, pruned_loss=0.04096, over 36926.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2558, pruned_loss=0.04039, over 7127941.79 frames. ], batch size: 100, lr: 2.95e-03, grad_scale: 16.0 2023-05-11 13:54:25,650 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=668040.0, ans=0.125 2023-05-11 13:54:32,433 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.702e+02 3.507e+02 3.970e+02 4.808e+02 7.401e+02, threshold=7.940e+02, percent-clipped=1.0 2023-05-11 13:54:32,603 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.convnext.out_balancer.prob, batch_count=668040.0, ans=0.125 2023-05-11 13:54:35,942 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=11.49 vs. 
limit=15.0 2023-05-11 13:54:40,019 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.const_attention_rate, batch_count=668090.0, ans=0.025 2023-05-11 13:54:42,693 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp0.9 from training. Duration: 25.061125 2023-05-11 13:54:48,697 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=668090.0, ans=0.125 2023-05-11 13:54:54,282 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0 from training. Duration: 0.83 2023-05-11 13:54:54,547 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=668140.0, ans=0.1 2023-05-11 13:54:59,481 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=668140.0, ans=0.1 2023-05-11 13:55:05,854 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 13:55:13,158 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer2.prob, batch_count=668190.0, ans=0.125 2023-05-11 13:55:13,161 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff2_skip_rate, batch_count=668190.0, ans=0.0 2023-05-11 13:55:21,578 INFO [train.py:1021] (0/2) Epoch 37, batch 2750, loss[loss=0.1551, simple_loss=0.2423, pruned_loss=0.03391, over 36863.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2556, pruned_loss=0.04001, over 7164552.87 frames. ], batch size: 96, lr: 2.95e-03, grad_scale: 16.0 2023-05-11 13:55:21,694 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0 from training. Duration: 24.73 2023-05-11 13:55:34,640 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0 from training. Duration: 23.965 2023-05-11 13:55:43,275 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0030-146996-0_sp0.9 from training. Duration: 22.088875 2023-05-11 13:56:01,866 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0_sp0.9 from training. Duration: 23.6 2023-05-11 13:56:33,870 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.6300, 3.6007, 4.0253, 3.5838], device='cuda:0') 2023-05-11 13:56:34,851 INFO [train.py:1021] (0/2) Epoch 37, batch 2800, loss[loss=0.1886, simple_loss=0.2766, pruned_loss=0.05034, over 37044.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.256, pruned_loss=0.04013, over 7169193.90 frames. 
], batch size: 116, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 13:56:53,714 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=668540.0, ans=0.125 2023-05-11 13:57:00,479 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.725e+02 3.403e+02 3.803e+02 4.375e+02 6.808e+02, threshold=7.606e+02, percent-clipped=0.0 2023-05-11 13:57:02,213 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=668540.0, ans=0.0 2023-05-11 13:57:03,665 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=668590.0, ans=0.1 2023-05-11 13:57:09,976 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=3.04 vs. limit=12.0 2023-05-11 13:57:29,759 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=3.17 vs. limit=15.0 2023-05-11 13:57:46,382 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0 from training. Duration: 23.795 2023-05-11 13:57:48,988 INFO [train.py:1021] (0/2) Epoch 37, batch 2850, loss[loss=0.1632, simple_loss=0.2514, pruned_loss=0.03746, over 36854.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2556, pruned_loss=0.03999, over 7186530.75 frames. ], batch size: 96, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 13:58:02,142 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp1.1 from training. Duration: 21.5409375 2023-05-11 13:58:04,960 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp0.9 from training. Duration: 24.97775 2023-05-11 13:58:11,126 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.5620, 3.5810, 3.3849, 4.2662, 2.9460, 3.6589, 4.2904, 3.7036], device='cuda:0') 2023-05-11 13:58:16,657 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0_sp0.9 from training. Duration: 23.3444375 2023-05-11 13:58:39,972 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=668890.0, ans=0.125 2023-05-11 13:58:45,476 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0_sp0.9 from training. Duration: 23.2 2023-05-11 13:58:48,645 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.4271, 5.2462, 4.6376, 5.0062], device='cuda:0') 2023-05-11 13:58:52,534 WARNING [train.py:1182] (0/2) Exclude cut with ID 5653-46179-0060-117930-0_sp0.9 from training. Duration: 21.17225 2023-05-11 13:59:02,601 INFO [train.py:1021] (0/2) Epoch 37, batch 2900, loss[loss=0.1807, simple_loss=0.2746, pruned_loss=0.04344, over 35869.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2554, pruned_loss=0.04006, over 7165236.35 frames. ], batch size: 133, lr: 2.94e-03, grad_scale: 16.0 2023-05-11 13:59:04,704 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=668990.0, ans=0.125 2023-05-11 13:59:13,069 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp0.9 from training. 
Duration: 24.6555625 2023-05-11 13:59:30,011 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.981e+02 3.534e+02 4.040e+02 4.929e+02 8.181e+02, threshold=8.081e+02, percent-clipped=1.0 2023-05-11 13:59:31,845 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=669090.0, ans=0.125 2023-05-11 13:59:53,712 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=669140.0, ans=0.2 2023-05-11 13:59:56,950 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=3.29 vs. limit=15.0 2023-05-11 14:00:08,526 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0 from training. Duration: 20.44 2023-05-11 14:00:17,757 INFO [train.py:1021] (0/2) Epoch 37, batch 2950, loss[loss=0.1461, simple_loss=0.2292, pruned_loss=0.03149, over 36939.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2552, pruned_loss=0.03995, over 7171515.51 frames. ], batch size: 86, lr: 2.94e-03, grad_scale: 16.0 2023-05-11 14:00:23,537 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0_sp0.9 from training. Duration: 23.45 2023-05-11 14:00:32,847 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.73 vs. limit=10.0 2023-05-11 14:00:50,703 WARNING [train.py:1182] (0/2) Exclude cut with ID 6945-60535-0076-12784-0_sp0.9 from training. Duration: 20.52225 2023-05-11 14:00:58,544 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0 from training. Duration: 22.19 2023-05-11 14:01:09,226 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp1.1 from training. Duration: 25.3818125 2023-05-11 14:01:12,373 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer_ff3.min_abs, batch_count=669390.0, ans=0.2 2023-05-11 14:01:28,073 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp0.9 from training. Duration: 28.0944375 2023-05-11 14:01:31,037 INFO [train.py:1021] (0/2) Epoch 37, batch 3000, loss[loss=0.1545, simple_loss=0.2423, pruned_loss=0.03338, over 36946.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2561, pruned_loss=0.04022, over 7154541.37 frames. ], batch size: 95, lr: 2.94e-03, grad_scale: 16.0 2023-05-11 14:01:31,038 INFO [train.py:1048] (0/2) Computing validation loss 2023-05-11 14:01:44,903 INFO [train.py:1057] (0/2) Epoch 37, validation: loss=0.1518, simple_loss=0.252, pruned_loss=0.02582, over 944034.00 frames. 2023-05-11 14:01:44,904 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 18788MB 2023-05-11 14:01:48,010 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0_sp0.9 from training. Duration: 22.9444375 2023-05-11 14:01:57,487 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp1.1 from training. Duration: 21.6318125 2023-05-11 14:02:11,778 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.797e+02 3.421e+02 3.859e+02 4.382e+02 6.331e+02, threshold=7.717e+02, percent-clipped=0.0 2023-05-11 14:02:16,281 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0 from training. 
Duration: 23.695 2023-05-11 14:02:33,028 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=10.48 vs. limit=15.0 2023-05-11 14:02:41,269 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0 from training. Duration: 23.955 2023-05-11 14:02:59,062 INFO [train.py:1021] (0/2) Epoch 37, batch 3050, loss[loss=0.1639, simple_loss=0.2554, pruned_loss=0.03615, over 36906.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2557, pruned_loss=0.04002, over 7172129.04 frames. ], batch size: 105, lr: 2.94e-03, grad_scale: 16.0 2023-05-11 14:03:00,849 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=669740.0, ans=0.0 2023-05-11 14:03:08,084 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=669740.0, ans=0.125 2023-05-11 14:03:16,656 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp0.9 from training. Duration: 26.438875 2023-05-11 14:03:57,584 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.3504, 3.8956, 3.7402, 4.0228], device='cuda:0') 2023-05-11 14:04:01,586 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0021-26306-0_sp0.9 from training. Duration: 21.2444375 2023-05-11 14:04:03,116 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp0.9 from training. Duration: 31.02225 2023-05-11 14:04:13,047 INFO [train.py:1021] (0/2) Epoch 37, batch 3100, loss[loss=0.1633, simple_loss=0.2496, pruned_loss=0.03853, over 37054.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2557, pruned_loss=0.03995, over 7176763.00 frames. ], batch size: 94, lr: 2.94e-03, grad_scale: 16.0 2023-05-11 14:04:14,687 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0 from training. Duration: 22.395 2023-05-11 14:04:31,062 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0 from training. Duration: 21.075 2023-05-11 14:04:37,202 WARNING [train.py:1182] (0/2) Exclude cut with ID 6482-98857-0025-147532-0_sp0.9 from training. Duration: 20.0055625 2023-05-11 14:04:37,210 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0037-132304-0_sp0.9 from training. Duration: 22.05 2023-05-11 14:04:37,225 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0 from training. Duration: 26.8349375 2023-05-11 14:04:40,042 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.876e+02 3.511e+02 3.886e+02 4.736e+02 8.566e+02, threshold=7.773e+02, percent-clipped=1.0 2023-05-11 14:04:40,136 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp1.1 from training. Duration: 22.1090625 2023-05-11 14:04:47,265 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp0.9 from training. Duration: 26.6166875 2023-05-11 14:04:55,143 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=7.96 vs. 
limit=22.5 2023-05-11 14:04:57,808 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=670140.0, ans=0.95 2023-05-11 14:05:00,868 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.25 vs. limit=15.0 2023-05-11 14:05:06,191 WARNING [train.py:1182] (0/2) Exclude cut with ID 2046-178027-0000-53705-0_sp0.9 from training. Duration: 20.3055625 2023-05-11 14:05:25,946 WARNING [train.py:1182] (0/2) Exclude cut with ID 7205-50138-0008-5373-0_sp0.9 from training. Duration: 20.7 2023-05-11 14:05:27,293 INFO [train.py:1021] (0/2) Epoch 37, batch 3150, loss[loss=0.1785, simple_loss=0.2682, pruned_loss=0.04444, over 32403.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2557, pruned_loss=0.03986, over 7190911.12 frames. ], batch size: 170, lr: 2.94e-03, grad_scale: 16.0 2023-05-11 14:05:28,996 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=670240.0, ans=0.125 2023-05-11 14:05:55,439 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass_mid.scale_min, batch_count=670340.0, ans=0.2 2023-05-11 14:06:08,250 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0 from training. Duration: 22.48 2023-05-11 14:06:22,434 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=670390.0, ans=0.0 2023-05-11 14:06:25,136 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp0.9 from training. Duration: 29.816625 2023-05-11 14:06:35,518 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_skip_rate, batch_count=670440.0, ans=0.0 2023-05-11 14:06:41,054 INFO [train.py:1021] (0/2) Epoch 37, batch 3200, loss[loss=0.1386, simple_loss=0.2207, pruned_loss=0.02822, over 36830.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2558, pruned_loss=0.04008, over 7171402.75 frames. ], batch size: 84, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 14:06:44,058 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp1.1 from training. Duration: 22.7590625 2023-05-11 14:06:47,236 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 14:06:50,410 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0 from training. Duration: 22.555 2023-05-11 14:07:08,030 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.022e+02 3.445e+02 3.776e+02 4.300e+02 7.539e+02, threshold=7.551e+02, percent-clipped=0.0 2023-05-11 14:07:10,968 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0005-25975-0_sp0.9 from training. Duration: 21.688875 2023-05-11 14:07:12,807 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=670590.0, ans=0.2 2023-05-11 14:07:15,683 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=670590.0, ans=0.125 2023-05-11 14:07:34,531 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=3.81 vs. limit=15.0 2023-05-11 14:07:46,210 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0_sp0.9 from training. 
Duration: 22.6 2023-05-11 14:07:47,724 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=670690.0, ans=0.125 2023-05-11 14:07:54,581 INFO [train.py:1021] (0/2) Epoch 37, batch 3250, loss[loss=0.1569, simple_loss=0.2462, pruned_loss=0.03377, over 36862.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2561, pruned_loss=0.04003, over 7169440.86 frames. ], batch size: 96, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 14:08:22,942 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0 from training. Duration: 24.32 2023-05-11 14:08:56,204 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=670940.0, ans=0.0 2023-05-11 14:09:09,047 INFO [train.py:1021] (0/2) Epoch 37, batch 3300, loss[loss=0.1669, simple_loss=0.2573, pruned_loss=0.03822, over 36751.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2558, pruned_loss=0.03981, over 7162525.46 frames. ], batch size: 118, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 14:09:23,230 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-276745-0093-13116-0_sp0.9 from training. Duration: 21.061125 2023-05-11 14:09:28,014 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.70 vs. limit=15.0 2023-05-11 14:09:29,156 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3782, 3.6353, 4.0706, 3.7777], device='cuda:0') 2023-05-11 14:09:30,761 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=5.17 vs. limit=15.0 2023-05-11 14:09:35,637 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.661e+02 3.525e+02 3.930e+02 4.488e+02 7.055e+02, threshold=7.859e+02, percent-clipped=0.0 2023-05-11 14:09:37,236 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0024-15855-0_sp0.9 from training. Duration: 20.32225 2023-05-11 14:09:45,651 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=8.59 vs. limit=15.0 2023-05-11 14:09:52,118 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp1.1 from training. Duration: 0.7545625 2023-05-11 14:09:55,198 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=671140.0, ans=0.125 2023-05-11 14:10:06,569 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0_sp0.9 from training. Duration: 23.9333125 2023-05-11 14:10:22,684 INFO [train.py:1021] (0/2) Epoch 37, batch 3350, loss[loss=0.168, simple_loss=0.2587, pruned_loss=0.0387, over 37021.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2555, pruned_loss=0.03959, over 7174108.53 frames. ], batch size: 99, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 14:10:36,821 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.0194, 3.3238, 3.8037, 3.6772], device='cuda:0') 2023-05-11 14:10:40,821 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp1.1 from training. Duration: 20.17275 2023-05-11 14:10:46,721 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp1.1 from training. 
Duration: 20.436375 2023-05-11 14:10:54,012 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.hidden_balancer.prob, batch_count=671340.0, ans=0.125 2023-05-11 14:11:34,204 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.4322, 5.7405, 5.5419, 6.1832], device='cuda:0') 2023-05-11 14:11:34,328 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.1261, 4.2055, 4.6640, 4.7348], device='cuda:0') 2023-05-11 14:11:36,855 INFO [train.py:1021] (0/2) Epoch 37, batch 3400, loss[loss=0.1602, simple_loss=0.2433, pruned_loss=0.03856, over 37032.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2559, pruned_loss=0.03963, over 7195854.73 frames. ], batch size: 99, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 14:11:57,332 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=12.77 vs. limit=22.5 2023-05-11 14:11:59,614 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff2_skip_rate, batch_count=671540.0, ans=0.0 2023-05-11 14:12:01,002 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=671540.0, ans=0.125 2023-05-11 14:12:03,647 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.864e+02 3.466e+02 3.718e+02 4.416e+02 1.113e+03, threshold=7.436e+02, percent-clipped=1.0 2023-05-11 14:12:09,467 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0_sp0.9 from training. Duration: 23.1055625 2023-05-11 14:12:10,869 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp1.1 from training. Duration: 23.5 2023-05-11 14:12:15,593 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff2_skip_rate, batch_count=671590.0, ans=0.0 2023-05-11 14:12:21,685 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp0.9 from training. Duration: 26.62775 2023-05-11 14:12:24,703 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=671640.0, ans=0.125 2023-05-11 14:12:31,650 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0 from training. Duration: 21.105 2023-05-11 14:12:35,980 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0_sp0.9 from training. Duration: 24.411125 2023-05-11 14:12:41,836 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=671690.0, ans=0.1 2023-05-11 14:12:46,875 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.7067, 2.8299, 4.4048, 2.8897], device='cuda:0') 2023-05-11 14:12:50,765 INFO [train.py:1021] (0/2) Epoch 37, batch 3450, loss[loss=0.1675, simple_loss=0.2617, pruned_loss=0.0366, over 34463.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2559, pruned_loss=0.03985, over 7159319.59 frames. ], batch size: 145, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 14:13:04,993 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp1.1 from training. 
Duration: 21.263625 2023-05-11 14:13:08,187 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.const_attention_rate, batch_count=671790.0, ans=0.025 2023-05-11 14:13:32,109 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=671840.0, ans=0.1 2023-05-11 14:13:34,358 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer1.prob, batch_count=671890.0, ans=0.125 2023-05-11 14:13:39,692 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0 from training. Duration: 20.795 2023-05-11 14:13:39,983 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=671890.0, ans=0.2 2023-05-11 14:13:51,462 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0 from training. Duration: 24.76 2023-05-11 14:13:51,474 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0_sp0.9 from training. Duration: 22.25 2023-05-11 14:13:53,229 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.9359, 3.4717, 3.2472, 4.1375, 2.6277, 3.5747, 4.1400, 3.5202], device='cuda:0') 2023-05-11 14:14:04,606 INFO [train.py:1021] (0/2) Epoch 37, batch 3500, loss[loss=0.1668, simple_loss=0.2591, pruned_loss=0.03727, over 36912.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2553, pruned_loss=0.03967, over 7187561.52 frames. ], batch size: 105, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 14:14:14,892 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp1.1 from training. Duration: 20.5045625 2023-05-11 14:14:31,147 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.771e+02 3.526e+02 3.819e+02 4.232e+02 5.612e+02, threshold=7.638e+02, percent-clipped=0.0 2023-05-11 14:14:33,011 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 14:14:34,655 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=8.77 vs. limit=22.5 2023-05-11 14:15:03,095 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.8571, 3.1012, 4.5760, 3.1373], device='cuda:0') 2023-05-11 14:15:08,717 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=672190.0, ans=0.1 2023-05-11 14:15:16,947 INFO [train.py:1021] (0/2) Epoch 37, batch 3550, loss[loss=0.1879, simple_loss=0.2801, pruned_loss=0.04789, over 37063.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2557, pruned_loss=0.04012, over 7148938.70 frames. ], batch size: 116, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 14:15:24,809 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.60 vs. 
limit=10.0 2023-05-11 14:15:44,675 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 14:15:57,480 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.5230, 3.6197, 3.9858, 3.6034], device='cuda:0') 2023-05-11 14:16:17,247 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=672440.0, ans=0.125 2023-05-11 14:16:19,106 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=5.11 vs. limit=15.0 2023-05-11 14:16:28,429 INFO [train.py:1021] (0/2) Epoch 37, batch 3600, loss[loss=0.1725, simple_loss=0.2634, pruned_loss=0.04077, over 36756.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2553, pruned_loss=0.04003, over 7107229.85 frames. ], batch size: 118, lr: 2.94e-03, grad_scale: 32.0 2023-05-11 14:16:48,415 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=672540.0, ans=0.0 2023-05-11 14:16:54,001 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.1645, 2.4070, 3.4283, 2.6942], device='cuda:0') 2023-05-11 14:16:55,033 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.899e+02 3.413e+02 3.681e+02 4.016e+02 7.520e+02, threshold=7.362e+02, percent-clipped=0.0 2023-05-11 14:17:14,025 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 14:17:17,514 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/epoch-37.pt 2023-05-11 14:17:30,207 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp1.1 from training. Duration: 22.2954375 2023-05-11 14:17:34,965 INFO [train.py:1021] (0/2) Epoch 38, batch 0, loss[loss=0.1616, simple_loss=0.2535, pruned_loss=0.03481, over 36939.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2535, pruned_loss=0.03481, over 36939.00 frames. ], batch size: 95, lr: 2.90e-03, grad_scale: 32.0 2023-05-11 14:17:34,966 INFO [train.py:1048] (0/2) Computing validation loss 2023-05-11 14:17:47,903 INFO [train.py:1057] (0/2) Epoch 38, validation: loss=0.1515, simple_loss=0.2521, pruned_loss=0.02542, over 944034.00 frames. 2023-05-11 14:17:47,903 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 18788MB 2023-05-11 14:17:57,103 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=672670.0, ans=0.07 2023-05-11 14:18:07,160 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.attention_skip_rate, batch_count=672720.0, ans=0.0 2023-05-11 14:18:08,876 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 14:18:35,795 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=672820.0, ans=0.2 2023-05-11 14:18:41,092 WARNING [train.py:1182] (0/2) Exclude cut with ID 298-126791-0067-24026-0_sp0.9 from training. Duration: 21.438875 2023-05-11 14:18:45,452 WARNING [train.py:1182] (0/2) Exclude cut with ID 5652-39938-0025-23684-0_sp0.9 from training. 
Duration: 22.2055625 2023-05-11 14:19:01,213 INFO [train.py:1021] (0/2) Epoch 38, batch 50, loss[loss=0.156, simple_loss=0.2465, pruned_loss=0.03279, over 36967.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2544, pruned_loss=0.0346, over 1607415.49 frames. ], batch size: 95, lr: 2.90e-03, grad_scale: 32.0 2023-05-11 14:19:05,808 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=672920.0, ans=0.1 2023-05-11 14:19:49,864 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.419e+02 3.103e+02 3.409e+02 4.431e+02 5.896e+02, threshold=6.818e+02, percent-clipped=0.0 2023-05-11 14:19:53,154 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=673070.0, ans=0.125 2023-05-11 14:19:54,667 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.2147, 5.3961, 5.5632, 6.0574], device='cuda:0') 2023-05-11 14:20:15,153 INFO [train.py:1021] (0/2) Epoch 38, batch 100, loss[loss=0.1604, simple_loss=0.257, pruned_loss=0.03195, over 36904.00 frames. ], tot_loss[loss=0.1604, simple_loss=0.2531, pruned_loss=0.03383, over 2861680.47 frames. ], batch size: 105, lr: 2.89e-03, grad_scale: 32.0 2023-05-11 14:20:15,452 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass_mid.scale_min, batch_count=673170.0, ans=0.2 2023-05-11 14:20:21,641 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=673170.0, ans=0.0 2023-05-11 14:20:34,565 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer2.prob, batch_count=673220.0, ans=0.125 2023-05-11 14:20:39,006 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass_mid.scale_min, batch_count=673220.0, ans=0.2 2023-05-11 14:20:44,866 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 14:21:04,557 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=673320.0, ans=0.0 2023-05-11 14:21:06,193 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=10.79 vs. limit=15.0 2023-05-11 14:21:07,501 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.1919, 5.3469, 5.4974, 6.0368], device='cuda:0') 2023-05-11 14:21:14,050 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=673370.0, ans=0.0 2023-05-11 14:21:21,541 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=7.83 vs. limit=22.5 2023-05-11 14:21:29,370 INFO [train.py:1021] (0/2) Epoch 38, batch 150, loss[loss=0.1645, simple_loss=0.261, pruned_loss=0.034, over 36758.00 frames. ], tot_loss[loss=0.1604, simple_loss=0.2527, pruned_loss=0.034, over 3804282.56 frames. ], batch size: 122, lr: 2.89e-03, grad_scale: 32.0 2023-05-11 14:21:48,561 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0 from training. 
Duration: 24.525 2023-05-11 14:21:51,686 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.prob, batch_count=673470.0, ans=0.125 2023-05-11 14:22:18,703 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.328e+02 2.911e+02 3.365e+02 3.823e+02 6.298e+02, threshold=6.731e+02, percent-clipped=0.0 2023-05-11 14:22:20,442 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass_mid.scale_min, batch_count=673570.0, ans=0.2 2023-05-11 14:22:24,597 WARNING [train.py:1182] (0/2) Exclude cut with ID 3699-47246-0007-3408-0_sp0.9 from training. Duration: 20.26675 2023-05-11 14:22:39,137 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp0.9 from training. Duration: 27.25 2023-05-11 14:22:39,418 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=673620.0, ans=0.1 2023-05-11 14:22:43,404 INFO [train.py:1021] (0/2) Epoch 38, batch 200, loss[loss=0.1484, simple_loss=0.2348, pruned_loss=0.03102, over 37024.00 frames. ], tot_loss[loss=0.1599, simple_loss=0.2523, pruned_loss=0.03373, over 4583166.08 frames. ], batch size: 88, lr: 2.89e-03, grad_scale: 32.0 2023-05-11 14:23:09,349 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=673720.0, ans=0.0 2023-05-11 14:23:38,179 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=673820.0, ans=0.0 2023-05-11 14:23:56,546 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0 from training. Duration: 21.68 2023-05-11 14:23:57,935 INFO [train.py:1021] (0/2) Epoch 38, batch 250, loss[loss=0.1516, simple_loss=0.2517, pruned_loss=0.02577, over 37188.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2517, pruned_loss=0.03348, over 5173823.05 frames. ], batch size: 102, lr: 2.89e-03, grad_scale: 32.0 2023-05-11 14:24:09,598 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0 from training. Duration: 21.6300625 2023-05-11 14:24:20,291 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.1767, 2.6156, 3.4594, 2.6860], device='cuda:0') 2023-05-11 14:24:31,601 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0_sp0.9 from training. Duration: 24.033375 2023-05-11 14:24:33,241 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=674020.0, ans=0.0 2023-05-11 14:24:34,808 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.7897, 3.6026, 3.3149, 4.2863, 2.7445, 3.6236, 4.3203, 3.6976], device='cuda:0') 2023-05-11 14:24:47,848 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.390e+02 3.106e+02 3.813e+02 4.596e+02 7.208e+02, threshold=7.627e+02, percent-clipped=2.0 2023-05-11 14:25:06,023 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=674120.0, ans=0.125 2023-05-11 14:25:11,690 INFO [train.py:1021] (0/2) Epoch 38, batch 300, loss[loss=0.1684, simple_loss=0.263, pruned_loss=0.0369, over 36765.00 frames. ], tot_loss[loss=0.1593, simple_loss=0.2519, pruned_loss=0.03333, over 5637433.18 frames. 
], batch size: 122, lr: 2.89e-03, grad_scale: 16.0 2023-05-11 14:25:33,360 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0 from training. Duration: 22.905 2023-05-11 14:25:35,392 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp1.1 from training. Duration: 23.4318125 2023-05-11 14:25:41,416 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=674270.0, ans=0.125 2023-05-11 14:26:12,720 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=674370.0, ans=0.125 2023-05-11 14:26:25,626 INFO [train.py:1021] (0/2) Epoch 38, batch 350, loss[loss=0.1509, simple_loss=0.245, pruned_loss=0.02841, over 36865.00 frames. ], tot_loss[loss=0.1588, simple_loss=0.2513, pruned_loss=0.03309, over 6010916.35 frames. ], batch size: 96, lr: 2.89e-03, grad_scale: 8.0 2023-05-11 14:26:26,023 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.1543, 4.3709, 4.7121, 4.7845], device='cuda:0') 2023-05-11 14:26:36,695 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=674420.0, ans=0.0 2023-05-11 14:26:57,608 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.max_abs, batch_count=674520.0, ans=10.0 2023-05-11 14:27:17,626 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.510e+02 3.097e+02 3.477e+02 4.144e+02 8.336e+02, threshold=6.954e+02, percent-clipped=1.0 2023-05-11 14:27:21,015 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.1035, 3.9071, 3.6446, 3.9242, 3.3515, 2.9787, 3.3939, 2.9476], device='cuda:0') 2023-05-11 14:27:37,541 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp1.1 from training. Duration: 20.82275 2023-05-11 14:27:38,972 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp0.9 from training. Duration: 25.45 2023-05-11 14:27:40,397 INFO [train.py:1021] (0/2) Epoch 38, batch 400, loss[loss=0.1661, simple_loss=0.2636, pruned_loss=0.0343, over 36836.00 frames. ], tot_loss[loss=0.1597, simple_loss=0.2523, pruned_loss=0.03355, over 6303014.27 frames. ], batch size: 111, lr: 2.89e-03, grad_scale: 16.0 2023-05-11 14:28:29,124 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.3583, 5.6570, 5.5366, 6.1006], device='cuda:0') 2023-05-11 14:28:39,113 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0 from training. Duration: 25.775 2023-05-11 14:28:46,867 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3894, 3.9879, 2.2655, 2.4677], device='cuda:0') 2023-05-11 14:28:52,779 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=13.45 vs. limit=22.5 2023-05-11 14:28:53,427 INFO [train.py:1021] (0/2) Epoch 38, batch 450, loss[loss=0.1951, simple_loss=0.2826, pruned_loss=0.05376, over 24918.00 frames. ], tot_loss[loss=0.161, simple_loss=0.254, pruned_loss=0.03404, over 6459159.46 frames. ], batch size: 233, lr: 2.89e-03, grad_scale: 16.0 2023-05-11 14:28:59,195 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0_sp0.9 from training. 
Duration: 22.25 2023-05-11 14:29:23,860 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=675020.0, ans=0.125 2023-05-11 14:29:27,961 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0 from training. Duration: 26.205 2023-05-11 14:29:37,030 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=675070.0, ans=0.0 2023-05-11 14:29:45,393 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.472e+02 3.088e+02 3.432e+02 4.025e+02 7.085e+02, threshold=6.865e+02, percent-clipped=1.0 2023-05-11 14:29:45,524 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp0.9 from training. Duration: 30.1555625 2023-05-11 14:29:50,058 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=675070.0, ans=0.0 2023-05-11 14:29:51,321 WARNING [train.py:1182] (0/2) Exclude cut with ID 1265-135635-0050-6781-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 14:29:53,462 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.whiten.whitening_limit, batch_count=675120.0, ans=15.0 2023-05-11 14:29:59,997 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp1.1 from training. Duration: 20.6545625 2023-05-11 14:30:07,904 INFO [train.py:1021] (0/2) Epoch 38, batch 500, loss[loss=0.2026, simple_loss=0.2865, pruned_loss=0.05928, over 24325.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2547, pruned_loss=0.03403, over 6636257.15 frames. ], batch size: 233, lr: 2.89e-03, grad_scale: 16.0 2023-05-11 14:30:27,504 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=675220.0, ans=0.2 2023-05-11 14:30:39,025 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.conv.8.prob, batch_count=675270.0, ans=0.125 2023-05-11 14:30:39,172 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=675270.0, ans=0.1 2023-05-11 14:30:40,430 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0045-39920-0_sp0.9 from training. Duration: 20.52225 2023-05-11 14:31:01,767 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp0.9 from training. Duration: 29.1166875 2023-05-11 14:31:02,080 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff2_skip_rate, batch_count=675320.0, ans=0.0 2023-05-11 14:31:19,912 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.prob, batch_count=675370.0, ans=0.125 2023-05-11 14:31:22,373 INFO [train.py:1021] (0/2) Epoch 38, batch 550, loss[loss=0.1802, simple_loss=0.2734, pruned_loss=0.04351, over 36362.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2557, pruned_loss=0.03449, over 6745932.30 frames. 
], batch size: 126, lr: 2.89e-03, grad_scale: 16.0 2023-05-11 14:31:25,646 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=675420.0, ans=0.1 2023-05-11 14:31:27,022 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=675420.0, ans=0.125 2023-05-11 14:31:29,972 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.attention_skip_rate, batch_count=675420.0, ans=0.0 2023-05-11 14:31:37,570 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=675470.0, ans=0.1 2023-05-11 14:31:37,607 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=675470.0, ans=0.125 2023-05-11 14:31:40,725 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=6.61 vs. limit=15.0 2023-05-11 14:31:48,278 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.scale_min, batch_count=675470.0, ans=0.2 2023-05-11 14:31:49,769 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.7929, 3.0096, 4.5376, 3.1257], device='cuda:0') 2023-05-11 14:32:04,604 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133211-0007-59831-0_sp0.9 from training. Duration: 21.388875 2023-05-11 14:32:14,928 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.669e+02 3.101e+02 3.523e+02 4.214e+02 6.918e+02, threshold=7.046e+02, percent-clipped=1.0 2023-05-11 14:32:18,315 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=675570.0, ans=0.1 2023-05-11 14:32:36,922 INFO [train.py:1021] (0/2) Epoch 38, batch 600, loss[loss=0.2016, simple_loss=0.2841, pruned_loss=0.0596, over 24813.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2564, pruned_loss=0.03481, over 6828895.22 frames. ], batch size: 234, lr: 2.89e-03, grad_scale: 16.0 2023-05-11 14:32:40,428 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0 from training. Duration: 22.72 2023-05-11 14:32:41,685 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0_sp0.9 from training. Duration: 22.7444375 2023-05-11 14:33:09,625 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_skip_rate, batch_count=675770.0, ans=0.0 2023-05-11 14:33:25,406 WARNING [train.py:1182] (0/2) Exclude cut with ID 4133-6541-0027-40495-0_sp1.1 from training. Duration: 0.9681875 2023-05-11 14:33:28,309 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0_sp0.9 from training. Duration: 22.3166875 2023-05-11 14:33:31,445 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=675820.0, ans=0.125 2023-05-11 14:33:34,732 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133212-0015-59917-0_sp0.9 from training. 
Duration: 21.8166875 2023-05-11 14:33:47,161 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=675870.0, ans=0.0 2023-05-11 14:33:51,399 INFO [train.py:1021] (0/2) Epoch 38, batch 650, loss[loss=0.1675, simple_loss=0.2686, pruned_loss=0.03321, over 37010.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2567, pruned_loss=0.03488, over 6920034.65 frames. ], batch size: 104, lr: 2.89e-03, grad_scale: 16.0 2023-05-11 14:33:57,485 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.6949, 5.5053, 4.7922, 5.2791], device='cuda:0') 2023-05-11 14:34:10,359 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=675970.0, ans=0.0 2023-05-11 14:34:19,146 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=676020.0, ans=0.0 2023-05-11 14:34:40,721 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=676070.0, ans=0.0 2023-05-11 14:34:43,286 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.352e+02 3.150e+02 3.575e+02 4.679e+02 7.783e+02, threshold=7.149e+02, percent-clipped=4.0 2023-05-11 14:34:49,399 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=676120.0, ans=0.125 2023-05-11 14:35:04,939 INFO [train.py:1021] (0/2) Epoch 38, batch 700, loss[loss=0.1646, simple_loss=0.266, pruned_loss=0.03163, over 37082.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2561, pruned_loss=0.03475, over 6963697.39 frames. ], batch size: 103, lr: 2.89e-03, grad_scale: 16.0 2023-05-11 14:35:17,987 WARNING [train.py:1182] (0/2) Exclude cut with ID 4957-30119-0041-23990-0_sp0.9 from training. Duration: 20.22775 2023-05-11 14:35:36,640 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass_mid.scale_min, batch_count=676270.0, ans=0.2 2023-05-11 14:35:36,655 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=676270.0, ans=0.1 2023-05-11 14:35:39,771 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=676270.0, ans=0.0 2023-05-11 14:35:55,592 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.max_abs, batch_count=676320.0, ans=10.0 2023-05-11 14:36:05,328 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp1.1 from training. Duration: 24.67275 2023-05-11 14:36:08,505 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.const_attention_rate, batch_count=676370.0, ans=0.025 2023-05-11 14:36:13,399 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=676370.0, ans=0.125 2023-05-11 14:36:13,529 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=676370.0, ans=0.0 2023-05-11 14:36:19,350 INFO [train.py:1021] (0/2) Epoch 38, batch 750, loss[loss=0.1815, simple_loss=0.2766, pruned_loss=0.04318, over 36720.00 frames. ], tot_loss[loss=0.1631, simple_loss=0.2565, pruned_loss=0.03479, over 7040408.13 frames. 
], batch size: 122, lr: 2.89e-03, grad_scale: 16.0 2023-05-11 14:36:28,757 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.54 vs. limit=22.5 2023-05-11 14:36:33,891 WARNING [train.py:1182] (0/2) Exclude cut with ID 3082-165428-0081-50734-0_sp0.9 from training. Duration: 21.8055625 2023-05-11 14:36:41,627 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.const_attention_rate, batch_count=676470.0, ans=0.025 2023-05-11 14:37:03,048 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.prob, batch_count=676570.0, ans=0.125 2023-05-11 14:37:06,618 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=676570.0, ans=0.2 2023-05-11 14:37:07,292 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.conv_module1.whiten, num_groups=1, num_channels=192, metric=12.32 vs. limit=15.0 2023-05-11 14:37:09,655 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0_sp0.9 from training. Duration: 22.6666875 2023-05-11 14:37:10,964 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.650e+02 3.233e+02 3.805e+02 4.537e+02 6.494e+02, threshold=7.609e+02, percent-clipped=0.0 2023-05-11 14:37:32,713 INFO [train.py:1021] (0/2) Epoch 38, batch 800, loss[loss=0.1517, simple_loss=0.2409, pruned_loss=0.03121, over 37069.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2567, pruned_loss=0.03482, over 7053188.62 frames. ], batch size: 94, lr: 2.89e-03, grad_scale: 32.0 2023-05-11 14:37:36,032 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=676670.0, ans=0.125 2023-05-11 14:37:51,754 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=676720.0, ans=0.2 2023-05-11 14:37:52,889 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.conv.8.prob, batch_count=676720.0, ans=0.125 2023-05-11 14:37:53,156 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.4043, 3.3174, 4.7748, 3.5338], device='cuda:0') 2023-05-11 14:38:11,172 WARNING [train.py:1182] (0/2) Exclude cut with ID 2411-132532-0017-82279-0_sp1.1 from training. Duration: 0.9681875 2023-05-11 14:38:15,872 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=676820.0, ans=0.0 2023-05-11 14:38:27,932 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.34 vs. limit=15.0 2023-05-11 14:38:37,191 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0 from training. Duration: 22.485 2023-05-11 14:38:44,638 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=676920.0, ans=0.0 2023-05-11 14:38:45,796 INFO [train.py:1021] (0/2) Epoch 38, batch 850, loss[loss=0.152, simple_loss=0.2463, pruned_loss=0.0288, over 37009.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2561, pruned_loss=0.03452, over 7109953.55 frames. ], batch size: 99, lr: 2.89e-03, grad_scale: 32.0 2023-05-11 14:39:14,494 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp1.1 from training. 
Duration: 23.82275 2023-05-11 14:39:16,153 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=677020.0, ans=0.04949747468305833 2023-05-11 14:39:27,672 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0 from training. Duration: 20.77 2023-05-11 14:39:36,497 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0_sp0.9 from training. Duration: 24.088875 2023-05-11 14:39:37,741 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.520e+02 2.972e+02 3.416e+02 4.582e+02 7.599e+02, threshold=6.832e+02, percent-clipped=0.0 2023-05-11 14:39:59,529 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.min_abs, batch_count=677170.0, ans=0.5 2023-05-11 14:40:00,772 INFO [train.py:1021] (0/2) Epoch 38, batch 900, loss[loss=0.1716, simple_loss=0.2679, pruned_loss=0.03762, over 37100.00 frames. ], tot_loss[loss=0.1628, simple_loss=0.2564, pruned_loss=0.03457, over 7138115.32 frames. ], batch size: 107, lr: 2.89e-03, grad_scale: 32.0 2023-05-11 14:40:02,393 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=677170.0, ans=0.0 2023-05-11 14:40:09,431 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp1.1 from training. Duration: 20.4409375 2023-05-11 14:40:16,913 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=677220.0, ans=0.1 2023-05-11 14:41:02,251 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.conv_module1.whiten, num_groups=1, num_channels=192, metric=8.41 vs. limit=15.0 2023-05-11 14:41:07,636 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 14:41:14,437 INFO [train.py:1021] (0/2) Epoch 38, batch 950, loss[loss=0.1422, simple_loss=0.2325, pruned_loss=0.026, over 36812.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2556, pruned_loss=0.03424, over 7166184.19 frames. ], batch size: 89, lr: 2.89e-03, grad_scale: 32.0 2023-05-11 14:41:21,969 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.5174, 4.8553, 5.0190, 4.6928], device='cuda:0') 2023-05-11 14:41:27,571 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0_sp0.9 from training. Duration: 22.511125 2023-05-11 14:41:27,587 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0 from training. Duration: 20.675 2023-05-11 14:41:29,211 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=677470.0, ans=0.125 2023-05-11 14:42:06,171 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.479e+02 3.315e+02 3.761e+02 4.748e+02 8.533e+02, threshold=7.522e+02, percent-clipped=4.0 2023-05-11 14:42:27,199 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff3_skip_rate, batch_count=677670.0, ans=0.0 2023-05-11 14:42:28,391 INFO [train.py:1021] (0/2) Epoch 38, batch 1000, loss[loss=0.1582, simple_loss=0.2542, pruned_loss=0.03116, over 37171.00 frames. ], tot_loss[loss=0.1619, simple_loss=0.2554, pruned_loss=0.0342, over 7163061.95 frames. 
], batch size: 102, lr: 2.89e-03, grad_scale: 32.0 2023-05-11 14:42:58,960 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=6.28 vs. limit=15.0 2023-05-11 14:43:10,188 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.81 vs. limit=6.0 2023-05-11 14:43:10,927 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp0.9 from training. Duration: 24.9833125 2023-05-11 14:43:39,611 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0 from training. Duration: 27.14 2023-05-11 14:43:42,423 INFO [train.py:1021] (0/2) Epoch 38, batch 1050, loss[loss=0.1428, simple_loss=0.23, pruned_loss=0.02775, over 36809.00 frames. ], tot_loss[loss=0.1619, simple_loss=0.2554, pruned_loss=0.03415, over 7150228.47 frames. ], batch size: 84, lr: 2.88e-03, grad_scale: 32.0 2023-05-11 14:43:52,795 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=677920.0, ans=0.125 2023-05-11 14:43:57,093 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0 from training. Duration: 22.44 2023-05-11 14:44:34,348 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.551e+02 3.022e+02 3.304e+02 3.897e+02 6.766e+02, threshold=6.607e+02, percent-clipped=0.0 2023-05-11 14:44:46,297 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.attention_skip_rate, batch_count=678120.0, ans=0.0 2023-05-11 14:44:47,649 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=678120.0, ans=0.1 2023-05-11 14:44:56,112 INFO [train.py:1021] (0/2) Epoch 38, batch 1100, loss[loss=0.1761, simple_loss=0.2728, pruned_loss=0.03969, over 36755.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2548, pruned_loss=0.03411, over 7179272.76 frames. ], batch size: 118, lr: 2.88e-03, grad_scale: 32.0 2023-05-11 14:44:59,348 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=678170.0, ans=0.1 2023-05-11 14:45:04,265 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=678170.0, ans=0.1 2023-05-11 14:45:17,247 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=144, metric=5.30 vs. limit=10.0 2023-05-11 14:45:17,759 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0060-62364-0_sp0.9 from training. Duration: 21.361125 2023-05-11 14:45:22,234 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=678220.0, ans=0.2 2023-05-11 14:45:23,514 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp1.1 from training. Duration: 27.0318125 2023-05-11 14:45:35,273 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp0.9 from training. Duration: 28.638875 2023-05-11 14:45:36,848 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.min_abs, batch_count=678270.0, ans=0.5 2023-05-11 14:45:44,965 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn1.whiten, num_groups=1, num_channels=192, metric=10.63 vs. 
limit=22.5 2023-05-11 14:45:49,877 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0 from training. Duration: 20.4 2023-05-11 14:46:11,708 INFO [train.py:1021] (0/2) Epoch 38, batch 1150, loss[loss=0.1791, simple_loss=0.2749, pruned_loss=0.04166, over 36360.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2548, pruned_loss=0.03415, over 7176894.40 frames. ], batch size: 126, lr: 2.88e-03, grad_scale: 32.0 2023-05-11 14:46:22,036 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0 from training. Duration: 20.025 2023-05-11 14:46:23,394 WARNING [train.py:1182] (0/2) Exclude cut with ID 2364-131735-0112-64612-0_sp0.9 from training. Duration: 20.488875 2023-05-11 14:46:29,351 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0 from training. Duration: 29.735 2023-05-11 14:46:40,199 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=3.24 vs. limit=12.0 2023-05-11 14:47:03,961 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.610e+02 3.021e+02 3.591e+02 4.320e+02 6.325e+02, threshold=7.182e+02, percent-clipped=0.0 2023-05-11 14:47:11,772 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=678620.0, ans=0.125 2023-05-11 14:47:16,786 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=6.46 vs. limit=15.0 2023-05-11 14:47:17,579 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=678620.0, ans=0.1 2023-05-11 14:47:25,668 INFO [train.py:1021] (0/2) Epoch 38, batch 1200, loss[loss=0.1874, simple_loss=0.2823, pruned_loss=0.04625, over 34565.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2552, pruned_loss=0.03438, over 7179078.82 frames. ], batch size: 144, lr: 2.88e-03, grad_scale: 32.0 2023-05-11 14:47:53,298 WARNING [train.py:1182] (0/2) Exclude cut with ID 7276-92427-0014-12983-0_sp0.9 from training. Duration: 21.3055625 2023-05-11 14:47:54,659 WARNING [train.py:1182] (0/2) Exclude cut with ID 1025-75365-0008-79168-0_sp0.9 from training. Duration: 22.0666875 2023-05-11 14:47:56,409 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer2.prob, batch_count=678770.0, ans=0.125 2023-05-11 14:48:40,101 INFO [train.py:1021] (0/2) Epoch 38, batch 1250, loss[loss=0.1463, simple_loss=0.2372, pruned_loss=0.02777, over 37045.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.2556, pruned_loss=0.03435, over 7204251.77 frames. 
], batch size: 88, lr: 2.88e-03, grad_scale: 32.0 2023-05-11 14:48:51,580 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=678920.0, ans=0.125 2023-05-11 14:49:32,333 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.454e+02 3.076e+02 3.473e+02 4.044e+02 6.829e+02, threshold=6.945e+02, percent-clipped=0.0 2023-05-11 14:49:35,725 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3580, 4.5686, 2.2836, 2.4254], device='cuda:0') 2023-05-11 14:49:37,135 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.9819, 4.2998, 3.3002, 3.1872], device='cuda:0') 2023-05-11 14:49:43,409 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.4798, 3.4563, 3.2597, 4.1238, 2.3587, 3.5536, 4.1697, 3.5906], device='cuda:0') 2023-05-11 14:49:47,559 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0 from training. Duration: 20.26 2023-05-11 14:49:54,512 INFO [train.py:1021] (0/2) Epoch 38, batch 1300, loss[loss=0.1763, simple_loss=0.2702, pruned_loss=0.04124, over 37016.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2558, pruned_loss=0.03438, over 7219587.43 frames. ], batch size: 116, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 14:49:57,805 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3025, 4.5784, 2.3001, 2.3723], device='cuda:0') 2023-05-11 14:50:01,728 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0030-9324-0_sp0.9 from training. Duration: 21.3444375 2023-05-11 14:50:09,847 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward3.out_whiten.whitening_limit, batch_count=679220.0, ans=15.0 2023-05-11 14:50:13,511 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=679220.0, ans=0.125 2023-05-11 14:50:15,348 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=3.78 vs. limit=15.0 2023-05-11 14:50:35,054 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=679270.0, ans=0.125 2023-05-11 14:50:39,450 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer2.prob, batch_count=679320.0, ans=0.125 2023-05-11 14:50:59,778 WARNING [train.py:1182] (0/2) Exclude cut with ID 497-129325-0061-62254-0_sp1.1 from training. Duration: 0.97725 2023-05-11 14:51:01,526 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.6971, 3.9965, 2.5457, 2.8911], device='cuda:0') 2023-05-11 14:51:01,603 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.2831, 4.2725, 2.1450, 2.3591], device='cuda:0') 2023-05-11 14:51:08,447 INFO [train.py:1021] (0/2) Epoch 38, batch 1350, loss[loss=0.1565, simple_loss=0.2522, pruned_loss=0.03044, over 37093.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2561, pruned_loss=0.03444, over 7215749.67 frames. 
], batch size: 103, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 14:51:22,478 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=679470.0, ans=0.2 2023-05-11 14:51:27,728 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.8540, 4.1502, 2.8477, 2.9684], device='cuda:0') 2023-05-11 14:51:31,913 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=679470.0, ans=0.0 2023-05-11 14:51:31,937 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=679470.0, ans=0.125 2023-05-11 14:51:37,583 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0_sp0.9 from training. Duration: 22.97225 2023-05-11 14:51:52,336 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=679570.0, ans=0.0 2023-05-11 14:52:02,405 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.493e+02 2.932e+02 3.413e+02 4.013e+02 5.260e+02, threshold=6.826e+02, percent-clipped=0.0 2023-05-11 14:52:10,128 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0047-39922-0_sp0.9 from training. Duration: 21.97775 2023-05-11 14:52:23,525 INFO [train.py:1021] (0/2) Epoch 38, batch 1400, loss[loss=0.1419, simple_loss=0.229, pruned_loss=0.02742, over 37035.00 frames. ], tot_loss[loss=0.1619, simple_loss=0.2553, pruned_loss=0.03429, over 7193611.06 frames. ], batch size: 88, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 14:52:23,633 WARNING [train.py:1182] (0/2) Exclude cut with ID 1112-1043-0006-89194-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 14:52:23,822 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=679670.0, ans=0.125 2023-05-11 14:52:23,933 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=679670.0, ans=0.0 2023-05-11 14:52:35,205 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0 from training. Duration: 20.47 2023-05-11 14:52:57,921 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff2_skip_rate, batch_count=679770.0, ans=0.0 2023-05-11 14:53:02,366 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=679770.0, ans=0.2 2023-05-11 14:53:09,867 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 14:53:16,209 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=679820.0, ans=0.1 2023-05-11 14:53:30,255 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=679870.0, ans=0.95 2023-05-11 14:53:31,701 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=679870.0, ans=0.2 2023-05-11 14:53:37,294 INFO [train.py:1021] (0/2) Epoch 38, batch 1450, loss[loss=0.1692, simple_loss=0.2668, pruned_loss=0.03575, over 37061.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2549, pruned_loss=0.03421, over 7184753.32 frames. 
], batch size: 110, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 14:53:40,409 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0037-39912-0_sp0.9 from training. Duration: 20.67225 2023-05-11 14:53:59,644 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/checkpoint-136000.pt 2023-05-11 14:54:02,228 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp0.9 from training. Duration: 25.2444375 2023-05-11 14:54:03,298 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.53 vs. limit=6.0 2023-05-11 14:54:18,926 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass_mid.scale_min, batch_count=680020.0, ans=0.2 2023-05-11 14:54:21,849 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.attention_skip_rate, batch_count=680070.0, ans=0.0 2023-05-11 14:54:22,987 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0021-76797-0_sp0.9 from training. Duration: 21.1445 2023-05-11 14:54:31,669 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.527e+02 3.243e+02 3.827e+02 4.701e+02 7.942e+02, threshold=7.654e+02, percent-clipped=3.0 2023-05-11 14:54:48,371 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=680120.0, ans=0.125 2023-05-11 14:54:52,340 INFO [train.py:1021] (0/2) Epoch 38, batch 1500, loss[loss=0.1531, simple_loss=0.2409, pruned_loss=0.03259, over 37090.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2548, pruned_loss=0.03409, over 7197551.89 frames. ], batch size: 94, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 14:54:59,250 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=680170.0, ans=0.125 2023-05-11 14:55:19,733 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer1.prob, batch_count=680220.0, ans=0.125 2023-05-11 14:55:24,724 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=4.23 vs. limit=15.0 2023-05-11 14:55:37,613 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.6829, 4.9728, 5.1453, 4.8237], device='cuda:0') 2023-05-11 14:55:38,860 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp0.9 from training. Duration: 33.038875 2023-05-11 14:56:06,590 INFO [train.py:1021] (0/2) Epoch 38, batch 1550, loss[loss=0.1541, simple_loss=0.2506, pruned_loss=0.02883, over 37098.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2546, pruned_loss=0.03396, over 7201866.16 frames. ], batch size: 103, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 14:56:19,599 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64291-0000-16059-0_sp0.9 from training. Duration: 20.0944375 2023-05-11 14:56:31,988 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=680470.0, ans=0.125 2023-05-11 14:56:33,307 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp1.1 from training. 
Duration: 20.4 2023-05-11 14:56:35,231 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=680520.0, ans=0.0 2023-05-11 14:56:41,923 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0 from training. Duration: 20.085 2023-05-11 14:56:52,811 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0_sp0.9 from training. Duration: 23.07775 2023-05-11 14:56:53,009 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=680570.0, ans=0.07 2023-05-11 14:56:59,814 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.524e+02 3.016e+02 3.460e+02 3.993e+02 6.947e+02, threshold=6.919e+02, percent-clipped=0.0 2023-05-11 14:57:00,194 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer2.prob, batch_count=680570.0, ans=0.125 2023-05-11 14:57:14,939 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.9276, 3.9920, 4.6869, 4.8442], device='cuda:0') 2023-05-11 14:57:20,829 INFO [train.py:1021] (0/2) Epoch 38, batch 1600, loss[loss=0.1734, simple_loss=0.2691, pruned_loss=0.03885, over 36301.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.2545, pruned_loss=0.03379, over 7199935.58 frames. ], batch size: 126, lr: 2.88e-03, grad_scale: 32.0 2023-05-11 14:57:31,236 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.5026, 3.6614, 3.3806, 4.2673, 3.0918, 3.6417, 4.3503, 3.7269], device='cuda:0') 2023-05-11 14:57:41,599 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp0.9 from training. Duration: 24.9333125 2023-05-11 14:58:27,349 WARNING [train.py:1182] (0/2) Exclude cut with ID 5118-111612-0016-124680-0_sp0.9 from training. Duration: 20.388875 2023-05-11 14:58:33,842 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp1.1 from training. Duration: 20.3590625 2023-05-11 14:58:35,135 INFO [train.py:1021] (0/2) Epoch 38, batch 1650, loss[loss=0.1726, simple_loss=0.2661, pruned_loss=0.03955, over 34738.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.255, pruned_loss=0.03395, over 7198156.44 frames. ], batch size: 145, lr: 2.88e-03, grad_scale: 32.0 2023-05-11 14:58:53,292 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=5.09 vs. limit=15.0 2023-05-11 14:58:55,618 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward3.hidden_balancer.prob, batch_count=680970.0, ans=0.125 2023-05-11 14:59:19,460 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=681070.0, ans=0.125 2023-05-11 14:59:19,505 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=681070.0, ans=0.125 2023-05-11 14:59:28,375 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.414e+02 3.318e+02 3.774e+02 4.615e+02 7.825e+02, threshold=7.547e+02, percent-clipped=1.0 2023-05-11 14:59:43,173 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0_sp1.1 from training. Duration: 0.836375 2023-05-11 14:59:48,984 INFO [train.py:1021] (0/2) Epoch 38, batch 1700, loss[loss=0.1635, simple_loss=0.2526, pruned_loss=0.03719, over 36859.00 frames. 
], tot_loss[loss=0.1623, simple_loss=0.2558, pruned_loss=0.03445, over 7196947.34 frames. ], batch size: 96, lr: 2.88e-03, grad_scale: 32.0 2023-05-11 15:00:09,987 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=681220.0, ans=0.125 2023-05-11 15:00:24,738 WARNING [train.py:1182] (0/2) Exclude cut with ID 8565-290391-0049-67394-0_sp0.9 from training. Duration: 21.3166875 2023-05-11 15:00:55,460 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0029-104863-0_sp0.9 from training. Duration: 22.1055625 2023-05-11 15:01:02,790 INFO [train.py:1021] (0/2) Epoch 38, batch 1750, loss[loss=0.1994, simple_loss=0.2867, pruned_loss=0.05609, over 24543.00 frames. ], tot_loss[loss=0.1636, simple_loss=0.2564, pruned_loss=0.03541, over 7174206.27 frames. ], batch size: 233, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 15:01:03,076 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=681420.0, ans=0.1 2023-05-11 15:01:04,327 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp1.1 from training. Duration: 21.77725 2023-05-11 15:01:10,904 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=681420.0, ans=0.125 2023-05-11 15:01:23,917 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.5963, 4.9372, 5.0723, 4.7558], device='cuda:0') 2023-05-11 15:01:25,147 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp0.9 from training. Duration: 27.8166875 2023-05-11 15:01:37,024 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer1.min_positive, batch_count=681520.0, ans=0.025 2023-05-11 15:01:50,421 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp1.1 from training. Duration: 22.5090625 2023-05-11 15:01:57,469 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.915e+02 3.573e+02 4.154e+02 4.714e+02 7.519e+02, threshold=8.307e+02, percent-clipped=0.0 2023-05-11 15:01:57,587 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0 from training. Duration: 25.035 2023-05-11 15:02:16,970 INFO [train.py:1021] (0/2) Epoch 38, batch 1800, loss[loss=0.1597, simple_loss=0.2492, pruned_loss=0.03508, over 37168.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2567, pruned_loss=0.03642, over 7177214.51 frames. ], batch size: 102, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 15:02:17,072 WARNING [train.py:1182] (0/2) Exclude cut with ID 774-127930-0014-10412-0_sp1.1 from training. Duration: 0.95 2023-05-11 15:02:36,512 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp0.9 from training. Duration: 0.92225 2023-05-11 15:02:44,101 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=681720.0, ans=0.125 2023-05-11 15:03:04,561 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0 from training. Duration: 21.97 2023-05-11 15:03:24,800 WARNING [train.py:1182] (0/2) Exclude cut with ID 7492-105653-0055-62765-0_sp0.9 from training. Duration: 21.97225 2023-05-11 15:03:24,828 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp0.9 from training. 
Duration: 25.3333125 2023-05-11 15:03:29,883 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass.skip_rate, batch_count=681920.0, ans=0.09899494936611666 2023-05-11 15:03:30,994 INFO [train.py:1021] (0/2) Epoch 38, batch 1850, loss[loss=0.1897, simple_loss=0.2782, pruned_loss=0.05064, over 36426.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2566, pruned_loss=0.03701, over 7192771.98 frames. ], batch size: 126, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 15:03:34,545 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.86 vs. limit=6.0 2023-05-11 15:03:35,435 WARNING [train.py:1182] (0/2) Exclude cut with ID 5172-29468-0015-19128-0_sp0.9 from training. Duration: 21.5055625 2023-05-11 15:03:37,172 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=681920.0, ans=0.0 2023-05-11 15:03:44,333 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp1.1 from training. Duration: 20.72725 2023-05-11 15:04:18,870 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp0.9 from training. Duration: 26.32775 2023-05-11 15:04:22,018 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=682070.0, ans=0.1 2023-05-11 15:04:25,962 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.971e+02 3.456e+02 3.751e+02 4.294e+02 6.300e+02, threshold=7.502e+02, percent-clipped=0.0 2023-05-11 15:04:45,253 INFO [train.py:1021] (0/2) Epoch 38, batch 1900, loss[loss=0.1744, simple_loss=0.2637, pruned_loss=0.04251, over 37027.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2568, pruned_loss=0.0379, over 7142432.13 frames. ], batch size: 99, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 15:04:47,164 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=682170.0, ans=0.1 2023-05-11 15:04:48,223 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0 from training. Duration: 20.025 2023-05-11 15:04:51,340 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=682170.0, ans=0.0 2023-05-11 15:04:52,582 WARNING [train.py:1182] (0/2) Exclude cut with ID 6709-74022-0004-86860-0_sp1.1 from training. Duration: 0.9409375 2023-05-11 15:04:52,600 WARNING [train.py:1182] (0/2) Exclude cut with ID 4757-1811-0023-62229-0_sp0.9 from training. Duration: 21.37775 2023-05-11 15:05:13,202 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0004-25974-0_sp0.9 from training. Duration: 21.17225 2023-05-11 15:05:13,211 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp0.9 from training. Duration: 27.511125 2023-05-11 15:05:22,121 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.5427, 4.8808, 5.0380, 4.7356], device='cuda:0') 2023-05-11 15:05:47,305 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0 from training. Duration: 22.8 2023-05-11 15:05:51,504 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0 from training. Duration: 22.585 2023-05-11 15:05:59,399 INFO [train.py:1021] (0/2) Epoch 38, batch 1950, loss[loss=0.1693, simple_loss=0.258, pruned_loss=0.04031, over 36854.00 frames. 
], tot_loss[loss=0.1669, simple_loss=0.2568, pruned_loss=0.03851, over 7158710.89 frames. ], batch size: 96, lr: 2.88e-03, grad_scale: 16.0 2023-05-11 15:06:22,639 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0001-146967-0_sp0.9 from training. Duration: 22.0166875 2023-05-11 15:06:37,545 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp1.1 from training. Duration: 24.395375 2023-05-11 15:06:44,784 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp0.9 from training. Duration: 27.47775 2023-05-11 15:06:48,183 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp0.9 from training. Duration: 24.8833125 2023-05-11 15:06:51,004 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0 from training. Duration: 23.39 2023-05-11 15:06:53,880 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.062e+02 3.830e+02 4.278e+02 5.017e+02 7.626e+02, threshold=8.556e+02, percent-clipped=1.0 2023-05-11 15:06:56,893 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp0.9 from training. Duration: 28.72225 2023-05-11 15:06:57,095 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.attention_skip_rate, batch_count=682620.0, ans=0.0 2023-05-11 15:07:05,786 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=682620.0, ans=0.0 2023-05-11 15:07:07,100 WARNING [train.py:1182] (0/2) Exclude cut with ID 585-294811-0110-133686-0_sp0.9 from training. Duration: 20.8944375 2023-05-11 15:07:12,800 INFO [train.py:1021] (0/2) Epoch 38, batch 2000, loss[loss=0.1817, simple_loss=0.2709, pruned_loss=0.0462, over 36016.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2578, pruned_loss=0.0395, over 7128365.66 frames. ], batch size: 133, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:07:22,837 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=3.71 vs. limit=15.0 2023-05-11 15:07:23,624 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0_sp0.9 from training. Duration: 23.8444375 2023-05-11 15:07:49,021 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0 from training. Duration: 25.85 2023-05-11 15:07:49,028 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0 from training. Duration: 21.39 2023-05-11 15:07:59,223 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0 from training. Duration: 27.92 2023-05-11 15:07:59,521 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.max_abs, batch_count=682820.0, ans=10.0 2023-05-11 15:08:02,412 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:08:18,969 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:08:23,405 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=682870.0, ans=0.0 2023-05-11 15:08:26,211 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=682920.0, ans=0.0 2023-05-11 15:08:27,337 INFO [train.py:1021] (0/2) Epoch 38, batch 2050, loss[loss=0.1642, simple_loss=0.2601, pruned_loss=0.03421, over 37075.00 frames. 
], tot_loss[loss=0.168, simple_loss=0.257, pruned_loss=0.03948, over 7155426.39 frames. ], batch size: 103, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:08:28,891 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0039-130165-0_sp0.9 from training. Duration: 20.661125 2023-05-11 15:08:44,153 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=682970.0, ans=0.0 2023-05-11 15:08:53,950 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0043-15874-0_sp0.9 from training. Duration: 20.07225 2023-05-11 15:08:55,679 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=683020.0, ans=0.0 2023-05-11 15:09:00,985 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0 from training. Duration: 21.01 2023-05-11 15:09:22,567 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.964e+02 3.564e+02 4.092e+02 4.501e+02 6.894e+02, threshold=8.185e+02, percent-clipped=0.0 2023-05-11 15:09:33,038 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=683120.0, ans=0.1 2023-05-11 15:09:37,533 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=683120.0, ans=0.0 2023-05-11 15:09:41,687 INFO [train.py:1021] (0/2) Epoch 38, batch 2100, loss[loss=0.1828, simple_loss=0.27, pruned_loss=0.04782, over 36712.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2566, pruned_loss=0.03973, over 7149284.17 frames. ], batch size: 122, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:09:49,257 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff3_skip_rate, batch_count=683170.0, ans=0.0 2023-05-11 15:09:52,672 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.7661, 2.6528, 4.4437, 2.9219], device='cuda:0') 2023-05-11 15:10:08,297 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0 from training. Duration: 20.65 2023-05-11 15:10:10,552 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=5.37 vs. limit=12.0 2023-05-11 15:10:14,780 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0 from training. Duration: 21.46 2023-05-11 15:10:38,830 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=5.68 vs. limit=15.0 2023-05-11 15:10:43,438 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer2.prob, batch_count=683370.0, ans=0.125 2023-05-11 15:10:55,774 INFO [train.py:1021] (0/2) Epoch 38, batch 2150, loss[loss=0.1794, simple_loss=0.2687, pruned_loss=0.04506, over 32346.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2567, pruned_loss=0.03999, over 7136598.80 frames. ], batch size: 170, lr: 2.87e-03, grad_scale: 16.0 2023-05-11 15:10:57,312 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0 from training. Duration: 0.92 2023-05-11 15:11:05,255 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0_sp0.9 from training. 
Duration: 23.7666875 2023-05-11 15:11:12,763 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass.skip_rate, batch_count=683470.0, ans=0.09899494936611666 2023-05-11 15:11:43,421 WARNING [train.py:1182] (0/2) Exclude cut with ID 8544-281189-0060-101339-0_sp0.9 from training. Duration: 20.861125 2023-05-11 15:11:45,173 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.attention_skip_rate, batch_count=683570.0, ans=0.0 2023-05-11 15:11:52,083 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.715e+02 3.344e+02 3.734e+02 4.156e+02 5.994e+02, threshold=7.468e+02, percent-clipped=0.0 2023-05-11 15:11:54,262 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0_sp0.9 from training. Duration: 22.711125 2023-05-11 15:11:54,518 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.prob, batch_count=683620.0, ans=0.125 2023-05-11 15:12:10,119 INFO [train.py:1021] (0/2) Epoch 38, batch 2200, loss[loss=0.1532, simple_loss=0.236, pruned_loss=0.03517, over 36973.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2565, pruned_loss=0.04032, over 7137102.10 frames. ], batch size: 95, lr: 2.87e-03, grad_scale: 16.0 2023-05-11 15:12:17,544 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=683670.0, ans=10.0 2023-05-11 15:12:26,017 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.balancer2.prob, batch_count=683720.0, ans=0.125 2023-05-11 15:12:27,398 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff2_skip_rate, batch_count=683720.0, ans=0.0 2023-05-11 15:12:36,467 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp1.1 from training. Duration: 22.986375 2023-05-11 15:12:46,528 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=18.62 vs. limit=22.5 2023-05-11 15:12:54,403 WARNING [train.py:1182] (0/2) Exclude cut with ID 8040-260924-0003-80960-0_sp0.9 from training. Duration: 22.07225 2023-05-11 15:12:56,669 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.17 vs. limit=15.0 2023-05-11 15:12:57,430 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.const_attention_rate, batch_count=683820.0, ans=0.025 2023-05-11 15:12:58,628 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0045-26330-0_sp0.9 from training. Duration: 20.3055625 2023-05-11 15:13:00,313 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=683820.0, ans=0.0 2023-05-11 15:13:01,495 WARNING [train.py:1182] (0/2) Exclude cut with ID 6356-271890-0060-94317-0_sp0.9 from training. Duration: 20.72225 2023-05-11 15:13:10,798 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff2_skip_rate, batch_count=683870.0, ans=0.0 2023-05-11 15:13:15,268 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=683870.0, ans=0.0 2023-05-11 15:13:17,756 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp1.1 from training. 
Duration: 22.4818125 2023-05-11 15:13:23,950 INFO [train.py:1021] (0/2) Epoch 38, batch 2250, loss[loss=0.1733, simple_loss=0.263, pruned_loss=0.04179, over 32252.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2559, pruned_loss=0.04013, over 7129888.78 frames. ], batch size: 170, lr: 2.87e-03, grad_scale: 16.0 2023-05-11 15:13:46,364 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp0.9 from training. Duration: 25.0944375 2023-05-11 15:13:50,793 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0 from training. Duration: 21.515 2023-05-11 15:13:51,124 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.prob, batch_count=683970.0, ans=0.125 2023-05-11 15:13:56,641 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp0.9 from training. Duration: 27.02225 2023-05-11 15:14:02,415 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0010-62480-0_sp0.9 from training. Duration: 22.22225 2023-05-11 15:14:07,050 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.prob, batch_count=684070.0, ans=0.125 2023-05-11 15:14:08,362 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0085-44554-0_sp0.9 from training. Duration: 20.85 2023-05-11 15:14:20,361 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.958e+02 3.553e+02 4.133e+02 5.089e+02 7.689e+02, threshold=8.267e+02, percent-clipped=1.0 2023-05-11 15:14:26,843 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.90 vs. limit=6.0 2023-05-11 15:14:34,208 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.3583, 3.6748, 3.9574, 3.9608], device='cuda:0') 2023-05-11 15:14:38,070 INFO [train.py:1021] (0/2) Epoch 38, batch 2300, loss[loss=0.1507, simple_loss=0.232, pruned_loss=0.03467, over 37059.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2559, pruned_loss=0.0402, over 7134066.60 frames. ], batch size: 88, lr: 2.87e-03, grad_scale: 16.0 2023-05-11 15:14:41,149 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0 from training. Duration: 21.54 2023-05-11 15:14:44,347 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=684170.0, ans=0.0 2023-05-11 15:14:47,017 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp1.1 from training. Duration: 20.5318125 2023-05-11 15:14:55,714 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0012-134311-0_sp0.9 from training. Duration: 21.9333125 2023-05-11 15:15:00,361 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.prob, batch_count=684220.0, ans=0.125 2023-05-11 15:15:00,868 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=3.53 vs. 
limit=6.0 2023-05-11 15:15:15,519 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=684270.0, ans=0.04949747468305833 2023-05-11 15:15:26,058 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=684320.0, ans=0.0 2023-05-11 15:15:43,408 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0025-130151-0_sp0.9 from training. Duration: 21.7944375 2023-05-11 15:15:51,865 INFO [train.py:1021] (0/2) Epoch 38, batch 2350, loss[loss=0.1525, simple_loss=0.2369, pruned_loss=0.03403, over 37083.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2562, pruned_loss=0.04052, over 7129340.43 frames. ], batch size: 88, lr: 2.87e-03, grad_scale: 16.0 2023-05-11 15:15:56,484 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0_sp0.9 from training. Duration: 22.4666875 2023-05-11 15:16:04,202 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0 from training. Duration: 21.635 2023-05-11 15:16:05,950 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=684470.0, ans=0.1 2023-05-11 15:16:10,386 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0_sp0.9 from training. Duration: 24.038875 2023-05-11 15:16:48,036 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.902e+02 3.715e+02 4.268e+02 5.320e+02 8.425e+02, threshold=8.536e+02, percent-clipped=1.0 2023-05-11 15:16:54,562 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp1.1 from training. Duration: 21.786375 2023-05-11 15:16:59,226 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.0490, 5.2434, 5.3515, 5.9381], device='cuda:0') 2023-05-11 15:16:59,894 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.10 vs. limit=6.0 2023-05-11 15:17:01,119 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=684620.0, ans=0.1 2023-05-11 15:17:05,357 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0 from training. Duration: 20.22 2023-05-11 15:17:06,706 INFO [train.py:1021] (0/2) Epoch 38, batch 2400, loss[loss=0.174, simple_loss=0.2655, pruned_loss=0.04123, over 36862.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2558, pruned_loss=0.04033, over 7142377.37 frames. 
], batch size: 113, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:17:13,007 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=684670.0, ans=0.125 2023-05-11 15:17:38,945 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=684770.0, ans=0.1 2023-05-11 15:17:40,449 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass_mid.scale_min, batch_count=684770.0, ans=0.2 2023-05-11 15:17:45,268 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=684770.0, ans=0.025 2023-05-11 15:17:49,825 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.4607, 4.7159, 2.3968, 2.5453], device='cuda:0') 2023-05-11 15:17:55,942 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:18:20,250 INFO [train.py:1021] (0/2) Epoch 38, batch 2450, loss[loss=0.1751, simple_loss=0.2634, pruned_loss=0.04338, over 32247.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2562, pruned_loss=0.04041, over 7139712.05 frames. ], batch size: 170, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:18:38,726 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=684970.0, ans=0.125 2023-05-11 15:18:40,149 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.2250, 5.4461, 5.5715, 6.1285], device='cuda:0') 2023-05-11 15:18:41,515 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=684970.0, ans=0.125 2023-05-11 15:18:48,214 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3339, 4.5093, 2.4251, 2.4760], device='cuda:0') 2023-05-11 15:18:49,437 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=685020.0, ans=0.2 2023-05-11 15:19:09,760 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0 from training. Duration: 25.285 2023-05-11 15:19:16,982 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.876e+02 3.452e+02 3.965e+02 4.458e+02 7.309e+02, threshold=7.930e+02, percent-clipped=0.0 2023-05-11 15:19:34,990 INFO [train.py:1021] (0/2) Epoch 38, batch 2500, loss[loss=0.1607, simple_loss=0.2504, pruned_loss=0.03553, over 37154.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2554, pruned_loss=0.04019, over 7138205.82 frames. ], batch size: 102, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:19:53,334 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=685220.0, ans=0.125 2023-05-11 15:20:09,095 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=685270.0, ans=0.125 2023-05-11 15:20:14,625 WARNING [train.py:1182] (0/2) Exclude cut with ID 811-130148-0001-63453-0_sp0.9 from training. 
Duration: 20.861125 2023-05-11 15:20:14,840 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=685270.0, ans=0.1 2023-05-11 15:20:26,098 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.skip_rate, batch_count=685320.0, ans=0.035 2023-05-11 15:20:37,281 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0 from training. Duration: 20.88 2023-05-11 15:20:48,777 INFO [train.py:1021] (0/2) Epoch 38, batch 2550, loss[loss=0.1696, simple_loss=0.2623, pruned_loss=0.03845, over 32292.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2554, pruned_loss=0.04009, over 7104874.67 frames. ], batch size: 170, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:20:50,637 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=685420.0, ans=0.1 2023-05-11 15:21:10,103 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0_sp0.9 from training. Duration: 23.4166875 2023-05-11 15:21:22,624 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff3_skip_rate, batch_count=685520.0, ans=0.0 2023-05-11 15:21:27,545 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.79 vs. limit=6.0 2023-05-11 15:21:28,485 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.const_attention_rate, batch_count=685520.0, ans=0.025 2023-05-11 15:21:45,176 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.773e+02 3.485e+02 3.858e+02 4.449e+02 6.490e+02, threshold=7.717e+02, percent-clipped=0.0 2023-05-11 15:21:57,252 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer1.prob, batch_count=685620.0, ans=0.125 2023-05-11 15:22:00,831 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=5.48 vs. limit=15.0 2023-05-11 15:22:02,516 INFO [train.py:1021] (0/2) Epoch 38, batch 2600, loss[loss=0.1738, simple_loss=0.2665, pruned_loss=0.04059, over 36717.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2556, pruned_loss=0.04027, over 7122777.90 frames. ], batch size: 118, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:22:03,533 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass_mid.scale_min, batch_count=685670.0, ans=0.2 2023-05-11 15:22:29,608 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0 from training. Duration: 21.24 2023-05-11 15:22:29,626 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0_sp0.9 from training. Duration: 23.9055625 2023-05-11 15:22:44,586 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=685770.0, ans=0.125 2023-05-11 15:23:01,402 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=685870.0, ans=0.1 2023-05-11 15:23:05,610 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp0.9 from training. Duration: 25.988875 2023-05-11 15:23:12,875 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0001-134300-0_sp0.9 from training. 
Duration: 20.67225 2023-05-11 15:23:17,143 INFO [train.py:1021] (0/2) Epoch 38, batch 2650, loss[loss=0.1477, simple_loss=0.2308, pruned_loss=0.03228, over 36816.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2552, pruned_loss=0.04021, over 7113289.92 frames. ], batch size: 89, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:23:33,278 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=685970.0, ans=0.125 2023-05-11 15:23:50,807 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=8.25 vs. limit=15.0 2023-05-11 15:23:52,536 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=192, metric=4.01 vs. limit=15.0 2023-05-11 15:24:02,386 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0 from training. Duration: 20.34 2023-05-11 15:24:13,863 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.841e+02 3.363e+02 3.713e+02 4.235e+02 6.464e+02, threshold=7.426e+02, percent-clipped=0.0 2023-05-11 15:24:28,781 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.scale_min, batch_count=686120.0, ans=0.2 2023-05-11 15:24:31,367 INFO [train.py:1021] (0/2) Epoch 38, batch 2700, loss[loss=0.1571, simple_loss=0.2465, pruned_loss=0.03383, over 37014.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2551, pruned_loss=0.04003, over 7132854.51 frames. ], batch size: 99, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:24:31,849 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.2770, 4.1911, 4.7430, 4.9667], device='cuda:0') 2023-05-11 15:24:47,329 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=686220.0, ans=0.125 2023-05-11 15:25:03,250 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:25:16,126 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp0.9 from training. Duration: 25.061125 2023-05-11 15:25:27,513 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0 from training. Duration: 0.83 2023-05-11 15:25:46,375 INFO [train.py:1021] (0/2) Epoch 38, batch 2750, loss[loss=0.1654, simple_loss=0.2514, pruned_loss=0.03974, over 37048.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2554, pruned_loss=0.04024, over 7131297.39 frames. ], batch size: 94, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:25:55,148 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0 from training. Duration: 24.73 2023-05-11 15:25:56,051 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.conv_module1.whiten, num_groups=1, num_channels=192, metric=9.26 vs. limit=15.0 2023-05-11 15:26:06,617 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0 from training. Duration: 23.965 2023-05-11 15:26:16,735 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0030-146996-0_sp0.9 from training. Duration: 22.088875 2023-05-11 15:26:21,379 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=686520.0, ans=0.125 2023-05-11 15:26:33,775 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0_sp0.9 from training. 
Duration: 23.6 2023-05-11 15:26:42,198 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.984e+02 3.507e+02 3.960e+02 4.780e+02 8.726e+02, threshold=7.920e+02, percent-clipped=2.0 2023-05-11 15:26:46,749 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff2_skip_rate, batch_count=686620.0, ans=0.0 2023-05-11 15:26:59,491 INFO [train.py:1021] (0/2) Epoch 38, batch 2800, loss[loss=0.1491, simple_loss=0.2381, pruned_loss=0.03004, over 37083.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2556, pruned_loss=0.04029, over 7117256.34 frames. ], batch size: 94, lr: 2.87e-03, grad_scale: 32.0 2023-05-11 15:27:01,616 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=7.36 vs. limit=15.0 2023-05-11 15:27:15,992 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=686720.0, ans=0.125 2023-05-11 15:27:23,818 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.const_attention_rate, batch_count=686720.0, ans=0.025 2023-05-11 15:27:29,470 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer2.prob, batch_count=686770.0, ans=0.125 2023-05-11 15:27:39,662 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.scale_min, batch_count=686770.0, ans=0.2 2023-05-11 15:28:14,044 INFO [train.py:1021] (0/2) Epoch 38, batch 2850, loss[loss=0.1691, simple_loss=0.2626, pruned_loss=0.03778, over 37106.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2559, pruned_loss=0.04032, over 7090668.14 frames. ], batch size: 107, lr: 2.87e-03, grad_scale: 16.0 2023-05-11 15:28:14,175 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0 from training. Duration: 23.795 2023-05-11 15:28:17,187 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=686920.0, ans=0.0 2023-05-11 15:28:21,722 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.7802, 5.0356, 5.2727, 4.9645], device='cuda:0') 2023-05-11 15:28:31,513 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp1.1 from training. Duration: 21.5409375 2023-05-11 15:28:33,048 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp0.9 from training. Duration: 24.97775 2023-05-11 15:28:43,017 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0_sp0.9 from training. Duration: 23.3444375 2023-05-11 15:29:11,721 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.971e+02 3.543e+02 3.999e+02 4.780e+02 6.439e+02, threshold=7.997e+02, percent-clipped=0.0 2023-05-11 15:29:11,838 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0_sp0.9 from training. Duration: 23.2 2023-05-11 15:29:19,052 WARNING [train.py:1182] (0/2) Exclude cut with ID 5653-46179-0060-117930-0_sp0.9 from training. Duration: 21.17225 2023-05-11 15:29:19,365 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=687120.0, ans=0.0 2023-05-11 15:29:27,814 INFO [train.py:1021] (0/2) Epoch 38, batch 2900, loss[loss=0.1762, simple_loss=0.2702, pruned_loss=0.04108, over 37012.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2563, pruned_loss=0.04028, over 7102447.68 frames. 
], batch size: 104, lr: 2.87e-03, grad_scale: 16.0 2023-05-11 15:29:37,610 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.13 vs. limit=15.0 2023-05-11 15:29:38,181 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp0.9 from training. Duration: 24.6555625 2023-05-11 15:30:16,963 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn1.whiten, num_groups=1, num_channels=192, metric=10.88 vs. limit=22.5 2023-05-11 15:30:34,992 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:30:37,525 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0 from training. Duration: 20.44 2023-05-11 15:30:40,083 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=687370.0, ans=0.125 2023-05-11 15:30:41,605 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.8534, 3.1028, 4.6235, 3.1872], device='cuda:0') 2023-05-11 15:30:42,728 INFO [train.py:1021] (0/2) Epoch 38, batch 2950, loss[loss=0.1402, simple_loss=0.225, pruned_loss=0.02772, over 37175.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2563, pruned_loss=0.04009, over 7101507.38 frames. ], batch size: 93, lr: 2.86e-03, grad_scale: 16.0 2023-05-11 15:30:44,351 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=687420.0, ans=0.0 2023-05-11 15:30:49,370 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.7711, 3.5681, 3.3743, 4.2141, 2.5906, 3.6246, 4.2524, 3.6508], device='cuda:0') 2023-05-11 15:30:52,022 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0_sp0.9 from training. Duration: 23.45 2023-05-11 15:30:59,439 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=687470.0, ans=0.125 2023-05-11 15:31:04,091 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten1.whitening_limit, batch_count=687470.0, ans=10.0 2023-05-11 15:31:19,833 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.1235, 3.1886, 4.6036, 3.2353], device='cuda:0') 2023-05-11 15:31:22,360 WARNING [train.py:1182] (0/2) Exclude cut with ID 6945-60535-0076-12784-0_sp0.9 from training. Duration: 20.52225 2023-05-11 15:31:28,449 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0 from training. Duration: 22.19 2023-05-11 15:31:32,370 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:31:33,837 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=687570.0, ans=0.1 2023-05-11 15:31:41,193 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.727e+02 3.449e+02 3.763e+02 4.356e+02 5.878e+02, threshold=7.526e+02, percent-clipped=0.0 2023-05-11 15:31:41,287 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp1.1 from training. 
Duration: 25.3818125 2023-05-11 15:31:57,318 INFO [train.py:1021] (0/2) Epoch 38, batch 3000, loss[loss=0.1564, simple_loss=0.2422, pruned_loss=0.03527, over 37050.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2561, pruned_loss=0.04013, over 7095382.18 frames. ], batch size: 94, lr: 2.86e-03, grad_scale: 16.0 2023-05-11 15:31:57,320 INFO [train.py:1048] (0/2) Computing validation loss 2023-05-11 15:32:10,397 INFO [train.py:1057] (0/2) Epoch 38, validation: loss=0.1517, simple_loss=0.2518, pruned_loss=0.02581, over 944034.00 frames. 2023-05-11 15:32:10,398 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 18788MB 2023-05-11 15:32:11,897 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp0.9 from training. Duration: 28.0944375 2023-05-11 15:32:15,506 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=3.29 vs. limit=15.0 2023-05-11 15:32:18,465 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0_sp0.9 from training. Duration: 22.9444375 2023-05-11 15:32:27,668 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp1.1 from training. Duration: 21.6318125 2023-05-11 15:32:34,926 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=687720.0, ans=0.125 2023-05-11 15:32:42,071 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0 from training. Duration: 23.695 2023-05-11 15:32:47,885 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.scale_min, batch_count=687770.0, ans=0.2 2023-05-11 15:32:58,282 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=687820.0, ans=0.2 2023-05-11 15:33:05,655 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff2_skip_rate, batch_count=687820.0, ans=0.0 2023-05-11 15:33:07,358 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0 from training. Duration: 23.955 2023-05-11 15:33:07,714 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=687820.0, ans=0.1 2023-05-11 15:33:12,104 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=687870.0, ans=0.1 2023-05-11 15:33:14,131 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.6042, 4.9860, 5.1066, 4.8047], device='cuda:0') 2023-05-11 15:33:25,214 INFO [train.py:1021] (0/2) Epoch 38, batch 3050, loss[loss=0.1817, simple_loss=0.2701, pruned_loss=0.04666, over 34789.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.256, pruned_loss=0.04043, over 7044276.62 frames. ], batch size: 145, lr: 2.86e-03, grad_scale: 16.0 2023-05-11 15:33:38,533 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=687970.0, ans=0.125 2023-05-11 15:33:41,216 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp0.9 from training. Duration: 26.438875 2023-05-11 15:33:41,882 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.48 vs. 
limit=15.0 2023-05-11 15:33:53,401 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.const_attention_rate, batch_count=688020.0, ans=0.025 2023-05-11 15:34:12,105 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.prob, batch_count=688070.0, ans=0.125 2023-05-11 15:34:13,670 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=688070.0, ans=0.1 2023-05-11 15:34:23,860 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.901e+02 3.404e+02 3.836e+02 4.367e+02 8.737e+02, threshold=7.673e+02, percent-clipped=2.0 2023-05-11 15:34:28,318 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0021-26306-0_sp0.9 from training. Duration: 21.2444375 2023-05-11 15:34:28,349 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp0.9 from training. Duration: 31.02225 2023-05-11 15:34:35,664 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=688120.0, ans=0.0 2023-05-11 15:34:38,323 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0 from training. Duration: 22.395 2023-05-11 15:34:39,727 INFO [train.py:1021] (0/2) Epoch 38, batch 3100, loss[loss=0.1844, simple_loss=0.2706, pruned_loss=0.04907, over 36758.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2561, pruned_loss=0.04036, over 7090053.44 frames. ], batch size: 122, lr: 2.86e-03, grad_scale: 16.0 2023-05-11 15:34:41,421 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=688170.0, ans=0.0 2023-05-11 15:34:56,928 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0 from training. Duration: 21.075 2023-05-11 15:35:01,403 WARNING [train.py:1182] (0/2) Exclude cut with ID 6482-98857-0025-147532-0_sp0.9 from training. Duration: 20.0055625 2023-05-11 15:35:01,415 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0037-132304-0_sp0.9 from training. Duration: 22.05 2023-05-11 15:35:01,735 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.2949, 4.1782, 3.8445, 4.1533, 3.4907, 3.1516, 3.5544, 3.0411], device='cuda:0') 2023-05-11 15:35:02,790 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0 from training. Duration: 26.8349375 2023-05-11 15:35:04,278 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp1.1 from training. Duration: 22.1090625 2023-05-11 15:35:13,197 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp0.9 from training. Duration: 26.6166875 2023-05-11 15:35:16,427 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.8310, 4.0074, 4.4186, 4.4328], device='cuda:0') 2023-05-11 15:35:22,659 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=4.52 vs. limit=15.0 2023-05-11 15:35:30,611 WARNING [train.py:1182] (0/2) Exclude cut with ID 2046-178027-0000-53705-0_sp0.9 from training. 
Duration: 20.3055625 2023-05-11 15:35:35,395 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:35:50,576 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:35:53,317 WARNING [train.py:1182] (0/2) Exclude cut with ID 7205-50138-0008-5373-0_sp0.9 from training. Duration: 20.7 2023-05-11 15:35:54,768 INFO [train.py:1021] (0/2) Epoch 38, batch 3150, loss[loss=0.1701, simple_loss=0.2591, pruned_loss=0.04055, over 32432.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.256, pruned_loss=0.04021, over 7090121.35 frames. ], batch size: 170, lr: 2.86e-03, grad_scale: 16.0 2023-05-11 15:36:35,626 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0 from training. Duration: 22.48 2023-05-11 15:36:52,249 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.944e+02 3.497e+02 3.889e+02 4.352e+02 6.694e+02, threshold=7.779e+02, percent-clipped=0.0 2023-05-11 15:36:53,846 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp0.9 from training. Duration: 29.816625 2023-05-11 15:37:08,283 INFO [train.py:1021] (0/2) Epoch 38, batch 3200, loss[loss=0.174, simple_loss=0.2658, pruned_loss=0.04108, over 36362.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2552, pruned_loss=0.03991, over 7097829.76 frames. ], batch size: 126, lr: 2.86e-03, grad_scale: 32.0 2023-05-11 15:37:12,812 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.5290, 4.8844, 4.8951, 5.4168], device='cuda:0') 2023-05-11 15:37:15,430 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp1.1 from training. Duration: 22.7590625 2023-05-11 15:37:15,616 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward2.hidden_balancer.prob, batch_count=688670.0, ans=0.125 2023-05-11 15:37:19,878 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0 from training. Duration: 22.555 2023-05-11 15:37:41,580 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0005-25975-0_sp0.9 from training. Duration: 21.688875 2023-05-11 15:37:50,188 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff2_skip_rate, batch_count=688770.0, ans=0.0 2023-05-11 15:38:14,877 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0_sp0.9 from training. Duration: 22.6 2023-05-11 15:38:15,084 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.skip_rate, batch_count=688870.0, ans=0.035 2023-05-11 15:38:22,603 INFO [train.py:1021] (0/2) Epoch 38, batch 3250, loss[loss=0.1686, simple_loss=0.2636, pruned_loss=0.03675, over 37161.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2554, pruned_loss=0.03997, over 7095473.16 frames. ], batch size: 112, lr: 2.86e-03, grad_scale: 32.0 2023-05-11 15:38:30,227 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=688920.0, ans=0.1 2023-05-11 15:38:40,937 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=6.25 vs. limit=15.0 2023-05-11 15:38:54,712 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0 from training. 
Duration: 24.32 2023-05-11 15:38:55,053 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff3_skip_rate, batch_count=689020.0, ans=0.0 2023-05-11 15:39:19,876 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.749e+02 3.481e+02 3.994e+02 4.701e+02 6.744e+02, threshold=7.989e+02, percent-clipped=0.0 2023-05-11 15:39:36,141 INFO [train.py:1021] (0/2) Epoch 38, batch 3300, loss[loss=0.1632, simple_loss=0.2521, pruned_loss=0.0371, over 37170.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2545, pruned_loss=0.03964, over 7121155.57 frames. ], batch size: 102, lr: 2.86e-03, grad_scale: 32.0 2023-05-11 15:39:53,395 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-276745-0093-13116-0_sp0.9 from training. Duration: 21.061125 2023-05-11 15:40:06,452 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0024-15855-0_sp0.9 from training. Duration: 20.32225 2023-05-11 15:40:20,623 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp1.1 from training. Duration: 0.7545625 2023-05-11 15:40:20,959 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.min_positive, batch_count=689320.0, ans=0.05 2023-05-11 15:40:29,632 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=689320.0, ans=0.125 2023-05-11 15:40:35,302 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0_sp0.9 from training. Duration: 23.9333125 2023-05-11 15:40:39,886 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.3341, 5.5967, 5.7019, 6.2118], device='cuda:0') 2023-05-11 15:40:49,821 INFO [train.py:1021] (0/2) Epoch 38, batch 3350, loss[loss=0.164, simple_loss=0.2595, pruned_loss=0.03424, over 36832.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.255, pruned_loss=0.0399, over 7061819.51 frames. ], batch size: 111, lr: 2.86e-03, grad_scale: 32.0 2023-05-11 15:41:08,133 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp1.1 from training. Duration: 20.17275 2023-05-11 15:41:09,945 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=689470.0, ans=0.1 2023-05-11 15:41:12,704 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp1.1 from training. Duration: 20.436375 2023-05-11 15:41:30,217 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.4959, 5.3024, 4.6759, 5.0773], device='cuda:0') 2023-05-11 15:41:31,700 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=689520.0, ans=0.0 2023-05-11 15:41:49,929 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.863e+02 3.453e+02 3.863e+02 4.480e+02 5.933e+02, threshold=7.726e+02, percent-clipped=0.0 2023-05-11 15:42:04,349 INFO [train.py:1021] (0/2) Epoch 38, batch 3400, loss[loss=0.1649, simple_loss=0.2603, pruned_loss=0.03481, over 37008.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2553, pruned_loss=0.03994, over 7078788.42 frames. 
], batch size: 104, lr: 2.86e-03, grad_scale: 16.0 2023-05-11 15:42:06,215 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.0261, 4.1901, 4.5970, 4.8472], device='cuda:0') 2023-05-11 15:42:38,270 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0_sp0.9 from training. Duration: 23.1055625 2023-05-11 15:42:38,330 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp1.1 from training. Duration: 23.5 2023-05-11 15:42:46,203 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.54 vs. limit=10.0 2023-05-11 15:42:48,334 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp0.9 from training. Duration: 26.62775 2023-05-11 15:43:01,264 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0 from training. Duration: 21.105 2023-05-11 15:43:02,958 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=689870.0, ans=0.0 2023-05-11 15:43:05,692 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0_sp0.9 from training. Duration: 24.411125 2023-05-11 15:43:17,482 INFO [train.py:1021] (0/2) Epoch 38, batch 3450, loss[loss=0.1843, simple_loss=0.2689, pruned_loss=0.04987, over 24755.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2551, pruned_loss=0.03978, over 7084269.12 frames. ], batch size: 233, lr: 2.86e-03, grad_scale: 16.0 2023-05-11 15:43:30,518 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.self_attn2.whiten.whitening_limit, batch_count=689920.0, ans=22.5 2023-05-11 15:43:31,435 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer_ff3.min_abs, batch_count=689970.0, ans=0.2 2023-05-11 15:43:32,659 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp1.1 from training. Duration: 21.263625 2023-05-11 15:43:34,254 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.const_attention_rate, batch_count=689970.0, ans=0.025 2023-05-11 15:43:50,105 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=690020.0, ans=0.0 2023-05-11 15:44:03,684 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=13.26 vs. limit=22.5 2023-05-11 15:44:05,779 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0 from training. Duration: 20.795 2023-05-11 15:44:15,748 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.705e+02 3.368e+02 3.617e+02 4.146e+02 7.677e+02, threshold=7.233e+02, percent-clipped=0.0 2023-05-11 15:44:16,494 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0 from training. Duration: 24.76 2023-05-11 15:44:16,513 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0_sp0.9 from training. 
Duration: 22.25 2023-05-11 15:44:20,285 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=690120.0, ans=0.125 2023-05-11 15:44:21,904 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:44:21,922 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=690120.0, ans=0.0 2023-05-11 15:44:31,674 INFO [train.py:1021] (0/2) Epoch 38, batch 3500, loss[loss=0.1693, simple_loss=0.2637, pruned_loss=0.03747, over 36904.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2552, pruned_loss=0.03976, over 7094014.31 frames. ], batch size: 105, lr: 2.86e-03, grad_scale: 16.0 2023-05-11 15:44:34,273 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.19 vs. limit=6.0 2023-05-11 15:44:42,030 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp1.1 from training. Duration: 20.5045625 2023-05-11 15:45:00,929 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.prob, batch_count=690270.0, ans=0.125 2023-05-11 15:45:44,004 INFO [train.py:1021] (0/2) Epoch 38, batch 3550, loss[loss=0.1533, simple_loss=0.241, pruned_loss=0.03278, over 37203.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.256, pruned_loss=0.0399, over 7103949.34 frames. ], batch size: 93, lr: 2.86e-03, grad_scale: 16.0 2023-05-11 15:45:54,844 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=6.46 vs. limit=15.0 2023-05-11 15:46:08,894 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=6.68 vs. limit=15.0 2023-05-11 15:46:41,033 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.859e+02 3.351e+02 3.789e+02 4.574e+02 7.791e+02, threshold=7.577e+02, percent-clipped=2.0 2023-05-11 15:46:49,941 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=690620.0, ans=0.0 2023-05-11 15:46:55,299 INFO [train.py:1021] (0/2) Epoch 38, batch 3600, loss[loss=0.1799, simple_loss=0.27, pruned_loss=0.04487, over 36388.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2552, pruned_loss=0.03943, over 7121576.38 frames. ], batch size: 126, lr: 2.86e-03, grad_scale: 32.0 2023-05-11 15:46:58,431 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=690670.0, ans=0.125 2023-05-11 15:47:04,084 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.max_abs, batch_count=690670.0, ans=10.0 2023-05-11 15:47:12,655 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=690720.0, ans=0.125 2023-05-11 15:47:12,720 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=690720.0, ans=0.125 2023-05-11 15:47:16,010 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.66 vs. 
limit=10.0 2023-05-11 15:47:25,486 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=3.97 vs. limit=15.0 2023-05-11 15:47:26,925 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=3.04 vs. limit=6.0 2023-05-11 15:47:27,580 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=690770.0, ans=0.0 2023-05-11 15:47:30,915 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=6.29 vs. limit=15.0 2023-05-11 15:47:44,878 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/epoch-38.pt 2023-05-11 15:47:59,298 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp1.1 from training. Duration: 22.2954375 2023-05-11 15:48:03,801 INFO [train.py:1021] (0/2) Epoch 39, batch 0, loss[loss=0.1557, simple_loss=0.2448, pruned_loss=0.03331, over 36851.00 frames. ], tot_loss[loss=0.1557, simple_loss=0.2448, pruned_loss=0.03331, over 36851.00 frames. ], batch size: 96, lr: 2.82e-03, grad_scale: 32.0 2023-05-11 15:48:03,802 INFO [train.py:1048] (0/2) Computing validation loss 2023-05-11 15:48:16,566 INFO [train.py:1057] (0/2) Epoch 39, validation: loss=0.1526, simple_loss=0.2526, pruned_loss=0.02627, over 944034.00 frames. 2023-05-11 15:48:16,567 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 18788MB 2023-05-11 15:48:23,021 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.75 vs. limit=15.0 2023-05-11 15:49:11,647 WARNING [train.py:1182] (0/2) Exclude cut with ID 298-126791-0067-24026-0_sp0.9 from training. Duration: 21.438875 2023-05-11 15:49:16,198 WARNING [train.py:1182] (0/2) Exclude cut with ID 5652-39938-0025-23684-0_sp0.9 from training. Duration: 22.2055625 2023-05-11 15:49:22,977 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=691050.0, ans=0.0 2023-05-11 15:49:31,621 INFO [train.py:1021] (0/2) Epoch 39, batch 50, loss[loss=0.1701, simple_loss=0.2647, pruned_loss=0.03773, over 36882.00 frames. ], tot_loss[loss=0.1636, simple_loss=0.2569, pruned_loss=0.03519, over 1645253.38 frames. ], batch size: 105, lr: 2.82e-03, grad_scale: 32.0 2023-05-11 15:49:37,593 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.701e+02 3.485e+02 3.942e+02 4.493e+02 6.935e+02, threshold=7.885e+02, percent-clipped=0.0 2023-05-11 15:50:17,787 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=3.09 vs. limit=6.0 2023-05-11 15:50:38,308 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=691300.0, ans=0.0 2023-05-11 15:50:45,135 INFO [train.py:1021] (0/2) Epoch 39, batch 100, loss[loss=0.1728, simple_loss=0.2685, pruned_loss=0.03857, over 32390.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2547, pruned_loss=0.03467, over 2882062.61 frames. 
], batch size: 170, lr: 2.82e-03, grad_scale: 16.0 2023-05-11 15:50:58,721 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.self_attn2.whiten.whitening_limit, batch_count=691400.0, ans=22.5 2023-05-11 15:51:02,341 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=5.46 vs. limit=15.0 2023-05-11 15:51:16,376 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer2.min_positive, batch_count=691450.0, ans=0.05 2023-05-11 15:51:45,684 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=691550.0, ans=0.025 2023-05-11 15:51:54,883 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=691550.0, ans=0.125 2023-05-11 15:51:58,924 INFO [train.py:1021] (0/2) Epoch 39, batch 150, loss[loss=0.1455, simple_loss=0.2318, pruned_loss=0.02956, over 36790.00 frames. ], tot_loss[loss=0.1603, simple_loss=0.2525, pruned_loss=0.03402, over 3843243.21 frames. ], batch size: 89, lr: 2.82e-03, grad_scale: 16.0 2023-05-11 15:52:06,261 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.444e+02 2.943e+02 3.380e+02 3.943e+02 6.290e+02, threshold=6.759e+02, percent-clipped=0.0 2023-05-11 15:52:15,740 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.out_combiner.scale_min, batch_count=691650.0, ans=0.2 2023-05-11 15:52:18,563 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0 from training. Duration: 24.525 2023-05-11 15:52:26,048 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=691650.0, ans=0.2 2023-05-11 15:52:50,931 WARNING [train.py:1182] (0/2) Exclude cut with ID 3699-47246-0007-3408-0_sp0.9 from training. Duration: 20.26675 2023-05-11 15:53:04,251 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp0.9 from training. Duration: 27.25 2023-05-11 15:53:09,028 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.convnext.layerdrop_rate, batch_count=691800.0, ans=0.015 2023-05-11 15:53:13,241 INFO [train.py:1021] (0/2) Epoch 39, batch 200, loss[loss=0.1396, simple_loss=0.2326, pruned_loss=0.02331, over 36852.00 frames. ], tot_loss[loss=0.1601, simple_loss=0.2524, pruned_loss=0.03388, over 4551789.28 frames. ], batch size: 96, lr: 2.82e-03, grad_scale: 16.0 2023-05-11 15:53:13,577 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=691850.0, ans=0.125 2023-05-11 15:53:37,295 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=691900.0, ans=0.0 2023-05-11 15:54:19,720 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0 from training. Duration: 21.68 2023-05-11 15:54:20,032 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=692050.0, ans=0.2 2023-05-11 15:54:26,832 INFO [train.py:1021] (0/2) Epoch 39, batch 250, loss[loss=0.1595, simple_loss=0.2574, pruned_loss=0.03084, over 36953.00 frames. ], tot_loss[loss=0.1593, simple_loss=0.2515, pruned_loss=0.03359, over 5146934.60 frames. 
], batch size: 108, lr: 2.82e-03, grad_scale: 16.0 2023-05-11 15:54:31,936 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0 from training. Duration: 21.6300625 2023-05-11 15:54:34,762 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.479e+02 2.975e+02 3.257e+02 3.731e+02 7.823e+02, threshold=6.514e+02, percent-clipped=1.0 2023-05-11 15:54:55,993 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0_sp0.9 from training. Duration: 24.033375 2023-05-11 15:55:12,636 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:55:41,468 INFO [train.py:1021] (0/2) Epoch 39, batch 300, loss[loss=0.1657, simple_loss=0.2596, pruned_loss=0.03588, over 36745.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.2515, pruned_loss=0.03368, over 5598414.86 frames. ], batch size: 118, lr: 2.82e-03, grad_scale: 16.0 2023-05-11 15:55:58,153 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0 from training. Duration: 22.905 2023-05-11 15:55:58,195 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp1.1 from training. Duration: 23.4318125 2023-05-11 15:56:16,515 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=5.83 vs. limit=15.0 2023-05-11 15:56:21,852 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.0.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 15:56:55,333 INFO [train.py:1021] (0/2) Epoch 39, batch 350, loss[loss=0.1763, simple_loss=0.2725, pruned_loss=0.04004, over 36737.00 frames. ], tot_loss[loss=0.1598, simple_loss=0.2522, pruned_loss=0.0337, over 5987104.50 frames. ], batch size: 122, lr: 2.82e-03, grad_scale: 16.0 2023-05-11 15:57:00,031 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=692600.0, ans=0.1 2023-05-11 15:57:02,611 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.367e+02 2.923e+02 3.264e+02 3.700e+02 5.853e+02, threshold=6.528e+02, percent-clipped=0.0 2023-05-11 15:57:19,505 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.3746, 3.6685, 4.0069, 3.9559], device='cuda:0') 2023-05-11 15:57:36,097 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=692700.0, ans=0.125 2023-05-11 15:57:37,374 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.prob, batch_count=692700.0, ans=0.125 2023-05-11 15:57:47,529 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass_mid.scale_min, batch_count=692750.0, ans=0.2 2023-05-11 15:57:51,790 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer2.prob, batch_count=692750.0, ans=0.125 2023-05-11 15:57:57,328 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp1.1 from training. Duration: 20.82275 2023-05-11 15:57:58,884 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp0.9 from training. Duration: 25.45 2023-05-11 15:58:09,373 INFO [train.py:1021] (0/2) Epoch 39, batch 400, loss[loss=0.1502, simple_loss=0.2342, pruned_loss=0.03309, over 37012.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2534, pruned_loss=0.03404, over 6242558.66 frames. 
], batch size: 86, lr: 2.82e-03, grad_scale: 32.0 2023-05-11 15:58:27,078 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=692900.0, ans=0.125 2023-05-11 15:58:42,115 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.prob, batch_count=692950.0, ans=0.125 2023-05-11 15:58:49,587 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=6.57 vs. limit=15.0 2023-05-11 15:58:57,208 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=693000.0, ans=0.125 2023-05-11 15:58:57,243 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.scale_min, batch_count=693000.0, ans=0.2 2023-05-11 15:58:58,341 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0 from training. Duration: 25.775 2023-05-11 15:59:15,950 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=693050.0, ans=0.1 2023-05-11 15:59:18,546 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0_sp0.9 from training. Duration: 22.25 2023-05-11 15:59:23,431 INFO [train.py:1021] (0/2) Epoch 39, batch 450, loss[loss=0.1545, simple_loss=0.2457, pruned_loss=0.03168, over 36849.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2544, pruned_loss=0.03416, over 6451777.88 frames. ], batch size: 96, lr: 2.82e-03, grad_scale: 32.0 2023-05-11 15:59:26,624 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=693100.0, ans=0.0 2023-05-11 15:59:29,478 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.min_positive, batch_count=693100.0, ans=0.025 2023-05-11 15:59:30,660 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.552e+02 2.992e+02 3.474e+02 3.992e+02 6.713e+02, threshold=6.949e+02, percent-clipped=1.0 2023-05-11 15:59:36,898 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=693150.0, ans=0.1 2023-05-11 15:59:37,009 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=693150.0, ans=0.07 2023-05-11 15:59:47,315 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.min_positive, batch_count=693150.0, ans=0.05 2023-05-11 15:59:48,505 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0 from training. Duration: 26.205 2023-05-11 16:00:06,034 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp0.9 from training. Duration: 30.1555625 2023-05-11 16:00:10,377 WARNING [train.py:1182] (0/2) Exclude cut with ID 1265-135635-0050-6781-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 16:00:11,958 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=693250.0, ans=0.125 2023-05-11 16:00:18,627 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=693250.0, ans=0.0 2023-05-11 16:00:21,211 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp1.1 from training. 
Duration: 20.6545625 2023-05-11 16:00:21,532 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3989, 3.9977, 3.6848, 3.9828, 3.3809, 3.0297, 3.4676, 2.9272], device='cuda:0') 2023-05-11 16:00:37,025 INFO [train.py:1021] (0/2) Epoch 39, batch 500, loss[loss=0.17, simple_loss=0.2674, pruned_loss=0.03629, over 32240.00 frames. ], tot_loss[loss=0.1621, simple_loss=0.2553, pruned_loss=0.03444, over 6613482.41 frames. ], batch size: 170, lr: 2.82e-03, grad_scale: 32.0 2023-05-11 16:01:06,387 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0045-39920-0_sp0.9 from training. Duration: 20.52225 2023-05-11 16:01:24,771 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.prob, batch_count=693500.0, ans=0.125 2023-05-11 16:01:28,835 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp0.9 from training. Duration: 29.1166875 2023-05-11 16:01:51,473 INFO [train.py:1021] (0/2) Epoch 39, batch 550, loss[loss=0.1523, simple_loss=0.2449, pruned_loss=0.02979, over 36971.00 frames. ], tot_loss[loss=0.1617, simple_loss=0.2549, pruned_loss=0.0343, over 6773054.81 frames. ], batch size: 95, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:01:58,647 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.417e+02 3.047e+02 3.512e+02 4.480e+02 8.240e+02, threshold=7.024e+02, percent-clipped=1.0 2023-05-11 16:02:27,531 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133211-0007-59831-0_sp0.9 from training. Duration: 21.388875 2023-05-11 16:02:33,339 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=693700.0, ans=0.1 2023-05-11 16:02:33,796 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.19 vs. limit=15.0 2023-05-11 16:03:04,183 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0 from training. Duration: 22.72 2023-05-11 16:03:05,613 INFO [train.py:1021] (0/2) Epoch 39, batch 600, loss[loss=0.1474, simple_loss=0.2305, pruned_loss=0.03212, over 35407.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2553, pruned_loss=0.03468, over 6851163.30 frames. ], batch size: 78, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:03:05,699 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0_sp0.9 from training. Duration: 22.7444375 2023-05-11 16:03:51,395 WARNING [train.py:1182] (0/2) Exclude cut with ID 4133-6541-0027-40495-0_sp1.1 from training. Duration: 0.9681875 2023-05-11 16:03:54,793 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0_sp0.9 from training. Duration: 22.3166875 2023-05-11 16:04:00,545 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133212-0015-59917-0_sp0.9 from training. Duration: 21.8166875 2023-05-11 16:04:02,154 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass_mid.scale_min, batch_count=694000.0, ans=0.2 2023-05-11 16:04:14,302 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=694050.0, ans=0.1 2023-05-11 16:04:19,797 INFO [train.py:1021] (0/2) Epoch 39, batch 650, loss[loss=0.1499, simple_loss=0.2371, pruned_loss=0.03131, over 36799.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.2555, pruned_loss=0.03444, over 6942251.65 frames. 
], batch size: 89, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:04:27,173 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.593e+02 3.034e+02 3.327e+02 4.071e+02 7.625e+02, threshold=6.655e+02, percent-clipped=2.0 2023-05-11 16:04:33,580 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=694150.0, ans=0.2 2023-05-11 16:04:37,219 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn2.whiten, num_groups=1, num_channels=192, metric=11.95 vs. limit=22.5 2023-05-11 16:04:38,145 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=6.61 vs. limit=15.0 2023-05-11 16:04:55,914 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=694200.0, ans=0.2 2023-05-11 16:05:23,597 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_skip_rate, batch_count=694300.0, ans=0.0 2023-05-11 16:05:33,479 INFO [train.py:1021] (0/2) Epoch 39, batch 700, loss[loss=0.1569, simple_loss=0.2566, pruned_loss=0.02861, over 37073.00 frames. ], tot_loss[loss=0.1621, simple_loss=0.2555, pruned_loss=0.03436, over 7016904.36 frames. ], batch size: 110, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:05:38,899 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.prob, batch_count=694350.0, ans=0.125 2023-05-11 16:05:45,643 WARNING [train.py:1182] (0/2) Exclude cut with ID 4957-30119-0041-23990-0_sp0.9 from training. Duration: 20.22775 2023-05-11 16:05:54,401 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.convnext.hidden_balancer.prob, batch_count=694400.0, ans=0.125 2023-05-11 16:06:29,467 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp1.1 from training. Duration: 24.67275 2023-05-11 16:06:44,593 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.prob, batch_count=694550.0, ans=0.125 2023-05-11 16:06:47,218 INFO [train.py:1021] (0/2) Epoch 39, batch 750, loss[loss=0.1682, simple_loss=0.2719, pruned_loss=0.03222, over 36945.00 frames. ], tot_loss[loss=0.1624, simple_loss=0.2559, pruned_loss=0.03442, over 7067490.13 frames. ], batch size: 108, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:06:50,969 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=694600.0, ans=0.0 2023-05-11 16:06:55,053 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.515e+02 3.088e+02 3.524e+02 4.361e+02 8.285e+02, threshold=7.049e+02, percent-clipped=2.0 2023-05-11 16:06:59,663 WARNING [train.py:1182] (0/2) Exclude cut with ID 3082-165428-0081-50734-0_sp0.9 from training. Duration: 21.8055625 2023-05-11 16:07:13,315 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.27 vs. limit=22.5 2023-05-11 16:07:28,110 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=694700.0, ans=0.1 2023-05-11 16:07:35,388 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0_sp0.9 from training. 
Duration: 22.6666875 2023-05-11 16:07:38,448 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass_mid.scale_min, batch_count=694750.0, ans=0.2 2023-05-11 16:07:45,737 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=144, metric=9.07 vs. limit=10.0 2023-05-11 16:08:02,117 INFO [train.py:1021] (0/2) Epoch 39, batch 800, loss[loss=0.1513, simple_loss=0.2357, pruned_loss=0.03348, over 37069.00 frames. ], tot_loss[loss=0.1624, simple_loss=0.2557, pruned_loss=0.03458, over 7106762.76 frames. ], batch size: 88, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:08:02,408 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=694850.0, ans=0.1 2023-05-11 16:08:08,229 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3524, 4.4532, 2.1937, 2.4450], device='cuda:0') 2023-05-11 16:08:33,779 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer2.prob, batch_count=694950.0, ans=0.125 2023-05-11 16:08:39,941 WARNING [train.py:1182] (0/2) Exclude cut with ID 2411-132532-0017-82279-0_sp1.1 from training. Duration: 0.9681875 2023-05-11 16:08:43,205 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=694950.0, ans=0.125 2023-05-11 16:08:51,655 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.3344, 5.6072, 5.4515, 6.0618], device='cuda:0') 2023-05-11 16:09:00,368 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.min_positive, batch_count=695050.0, ans=0.05 2023-05-11 16:09:02,131 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=3.63 vs. limit=12.0 2023-05-11 16:09:05,753 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0 from training. Duration: 22.485 2023-05-11 16:09:16,247 INFO [train.py:1021] (0/2) Epoch 39, batch 850, loss[loss=0.1791, simple_loss=0.2783, pruned_loss=0.03999, over 36044.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2566, pruned_loss=0.03466, over 7135468.09 frames. ], batch size: 133, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:09:20,680 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.prob, batch_count=695100.0, ans=0.125 2023-05-11 16:09:23,298 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.475e+02 3.034e+02 3.446e+02 3.998e+02 7.439e+02, threshold=6.892e+02, percent-clipped=2.0 2023-05-11 16:09:28,069 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.skip_rate, batch_count=695100.0, ans=0.07 2023-05-11 16:09:44,559 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp1.1 from training. Duration: 23.82275 2023-05-11 16:09:50,647 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=695200.0, ans=0.2 2023-05-11 16:09:57,464 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0 from training. Duration: 20.77 2023-05-11 16:10:06,577 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0_sp0.9 from training. 
Duration: 24.088875 2023-05-11 16:10:09,588 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer2.prob, batch_count=695250.0, ans=0.125 2023-05-11 16:10:10,987 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=695250.0, ans=0.1 2023-05-11 16:10:30,013 INFO [train.py:1021] (0/2) Epoch 39, batch 900, loss[loss=0.1702, simple_loss=0.2672, pruned_loss=0.03663, over 37100.00 frames. ], tot_loss[loss=0.1621, simple_loss=0.2554, pruned_loss=0.03436, over 7156998.31 frames. ], batch size: 107, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:10:37,381 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp1.1 from training. Duration: 20.4409375 2023-05-11 16:10:40,432 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.balancer2.prob, batch_count=695350.0, ans=0.125 2023-05-11 16:10:44,751 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=695400.0, ans=0.125 2023-05-11 16:11:16,154 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=695500.0, ans=0.0 2023-05-11 16:11:30,721 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.9352, 4.1128, 4.6599, 4.8341], device='cuda:0') 2023-05-11 16:11:32,560 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=13.63 vs. limit=22.5 2023-05-11 16:11:39,288 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=695550.0, ans=0.0 2023-05-11 16:11:43,350 INFO [train.py:1021] (0/2) Epoch 39, batch 950, loss[loss=0.1511, simple_loss=0.2451, pruned_loss=0.0285, over 36855.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2547, pruned_loss=0.03394, over 7186189.82 frames. ], batch size: 96, lr: 2.81e-03, grad_scale: 16.0 2023-05-11 16:11:45,765 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=10.51 vs. limit=22.5 2023-05-11 16:11:52,000 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.460e+02 3.030e+02 3.344e+02 4.078e+02 8.078e+02, threshold=6.687e+02, percent-clipped=1.0 2023-05-11 16:11:58,538 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0_sp0.9 from training. Duration: 22.511125 2023-05-11 16:11:58,562 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0 from training. Duration: 20.675 2023-05-11 16:12:03,076 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer1.min_positive, batch_count=695650.0, ans=0.025 2023-05-11 16:12:39,949 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=695750.0, ans=0.125 2023-05-11 16:12:57,683 INFO [train.py:1021] (0/2) Epoch 39, batch 1000, loss[loss=0.1761, simple_loss=0.2729, pruned_loss=0.03968, over 36402.00 frames. ], tot_loss[loss=0.1616, simple_loss=0.2551, pruned_loss=0.03406, over 7181686.26 frames. 
], batch size: 126, lr: 2.81e-03, grad_scale: 16.0 2023-05-11 16:12:57,974 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=695850.0, ans=0.125 2023-05-11 16:13:11,670 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.9651, 3.0714, 4.6673, 3.2896], device='cuda:0') 2023-05-11 16:13:39,204 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp0.9 from training. Duration: 24.9833125 2023-05-11 16:13:45,797 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=696000.0, ans=0.0 2023-05-11 16:13:52,041 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=5.21 vs. limit=15.0 2023-05-11 16:14:02,326 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 16:14:10,689 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0 from training. Duration: 27.14 2023-05-11 16:14:12,035 INFO [train.py:1021] (0/2) Epoch 39, batch 1050, loss[loss=0.1882, simple_loss=0.2754, pruned_loss=0.05048, over 24474.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2554, pruned_loss=0.03413, over 7167135.90 frames. ], batch size: 233, lr: 2.81e-03, grad_scale: 16.0 2023-05-11 16:14:15,208 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.0995, 5.4820, 5.2616, 5.8925], device='cuda:0') 2023-05-11 16:14:20,746 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.503e+02 3.140e+02 3.657e+02 4.157e+02 7.852e+02, threshold=7.315e+02, percent-clipped=2.0 2023-05-11 16:14:25,131 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0 from training. Duration: 22.44 2023-05-11 16:14:32,459 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=696150.0, ans=0.0 2023-05-11 16:14:36,414 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=7.94 vs. limit=22.5 2023-05-11 16:14:38,964 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=696150.0, ans=0.125 2023-05-11 16:14:40,837 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.36 vs. limit=22.5 2023-05-11 16:14:54,101 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.prob, batch_count=696200.0, ans=0.125 2023-05-11 16:15:25,470 INFO [train.py:1021] (0/2) Epoch 39, batch 1100, loss[loss=0.1677, simple_loss=0.2625, pruned_loss=0.03641, over 37195.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2567, pruned_loss=0.03462, over 7164869.32 frames. ], batch size: 102, lr: 2.81e-03, grad_scale: 16.0 2023-05-11 16:15:44,087 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0060-62364-0_sp0.9 from training. Duration: 21.361125 2023-05-11 16:15:47,168 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=696400.0, ans=0.2 2023-05-11 16:15:51,316 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp1.1 from training. 
Duration: 27.0318125 2023-05-11 16:16:01,633 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp0.9 from training. Duration: 28.638875 2023-05-11 16:16:15,858 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0 from training. Duration: 20.4 2023-05-11 16:16:39,647 INFO [train.py:1021] (0/2) Epoch 39, batch 1150, loss[loss=0.1384, simple_loss=0.226, pruned_loss=0.0254, over 35022.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.256, pruned_loss=0.03432, over 7185064.87 frames. ], batch size: 77, lr: 2.81e-03, grad_scale: 16.0 2023-05-11 16:16:48,451 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.555e+02 3.055e+02 3.328e+02 3.900e+02 5.916e+02, threshold=6.656e+02, percent-clipped=0.0 2023-05-11 16:16:48,577 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0 from training. Duration: 20.025 2023-05-11 16:16:48,786 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=696600.0, ans=0.0 2023-05-11 16:16:49,915 WARNING [train.py:1182] (0/2) Exclude cut with ID 2364-131735-0112-64612-0_sp0.9 from training. Duration: 20.488875 2023-05-11 16:16:55,757 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0 from training. Duration: 29.735 2023-05-11 16:16:55,973 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.prob, batch_count=696650.0, ans=0.125 2023-05-11 16:17:35,823 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=696750.0, ans=0.1 2023-05-11 16:17:51,964 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.9858, 4.3221, 4.4971, 4.2065], device='cuda:0') 2023-05-11 16:17:53,050 INFO [train.py:1021] (0/2) Epoch 39, batch 1200, loss[loss=0.1737, simple_loss=0.2709, pruned_loss=0.0383, over 32686.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2556, pruned_loss=0.03421, over 7175505.67 frames. ], batch size: 170, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:17:53,469 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=696850.0, ans=0.1 2023-05-11 16:17:59,109 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=696850.0, ans=0.125 2023-05-11 16:18:01,975 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff3_skip_rate, batch_count=696850.0, ans=0.0 2023-05-11 16:18:12,686 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.1069, 3.2440, 4.7496, 3.3110], device='cuda:0') 2023-05-11 16:18:14,497 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer1.prob, batch_count=696900.0, ans=0.125 2023-05-11 16:18:17,186 WARNING [train.py:1182] (0/2) Exclude cut with ID 7276-92427-0014-12983-0_sp0.9 from training. Duration: 21.3055625 2023-05-11 16:18:18,629 WARNING [train.py:1182] (0/2) Exclude cut with ID 1025-75365-0008-79168-0_sp0.9 from training. Duration: 22.0666875 2023-05-11 16:18:22,528 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=5.38 vs. 
limit=15.0 2023-05-11 16:18:23,206 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=696950.0, ans=0.0 2023-05-11 16:18:40,706 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.const_attention_rate, batch_count=697000.0, ans=0.025 2023-05-11 16:18:45,126 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer1.prob, batch_count=697000.0, ans=0.125 2023-05-11 16:18:52,646 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=3.76 vs. limit=15.0 2023-05-11 16:19:07,494 INFO [train.py:1021] (0/2) Epoch 39, batch 1250, loss[loss=0.1549, simple_loss=0.2464, pruned_loss=0.03175, over 37033.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2547, pruned_loss=0.03399, over 7202777.13 frames. ], batch size: 99, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:19:09,136 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.conv.8.prob, batch_count=697100.0, ans=0.125 2023-05-11 16:19:16,276 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.449e+02 3.080e+02 3.470e+02 4.238e+02 6.652e+02, threshold=6.940e+02, percent-clipped=0.0 2023-05-11 16:19:37,108 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.nonlin_attention.balancer.min_positive, batch_count=697200.0, ans=0.05 2023-05-11 16:20:05,748 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0 from training. Duration: 20.26 2023-05-11 16:20:07,443 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.1275, 3.8313, 3.5443, 3.8134, 3.2299, 2.9156, 3.3342, 2.8008], device='cuda:0') 2023-05-11 16:20:18,359 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0030-9324-0_sp0.9 from training. Duration: 21.3444375 2023-05-11 16:20:21,055 INFO [train.py:1021] (0/2) Epoch 39, batch 1300, loss[loss=0.1538, simple_loss=0.2486, pruned_loss=0.0295, over 36906.00 frames. ], tot_loss[loss=0.1621, simple_loss=0.2557, pruned_loss=0.03427, over 7194237.70 frames. ], batch size: 100, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:20:31,536 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.prob, batch_count=697350.0, ans=0.125 2023-05-11 16:21:17,309 WARNING [train.py:1182] (0/2) Exclude cut with ID 497-129325-0061-62254-0_sp1.1 from training. Duration: 0.97725 2023-05-11 16:21:34,928 INFO [train.py:1021] (0/2) Epoch 39, batch 1350, loss[loss=0.168, simple_loss=0.2672, pruned_loss=0.0344, over 36050.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.2558, pruned_loss=0.0343, over 7190139.04 frames. ], batch size: 133, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:21:38,313 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=697600.0, ans=0.125 2023-05-11 16:21:44,146 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.468e+02 3.004e+02 3.567e+02 4.390e+02 7.441e+02, threshold=7.135e+02, percent-clipped=3.0 2023-05-11 16:21:58,953 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=697650.0, ans=0.125 2023-05-11 16:22:01,626 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0_sp0.9 from training. 
Duration: 22.97225 2023-05-11 16:22:07,726 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=697700.0, ans=0.0 2023-05-11 16:22:14,960 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=697700.0, ans=0.125 2023-05-11 16:22:19,322 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=697750.0, ans=0.95 2023-05-11 16:22:29,379 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 16:22:32,503 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0047-39922-0_sp0.9 from training. Duration: 21.97775 2023-05-11 16:22:37,740 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff2_skip_rate, batch_count=697800.0, ans=0.0 2023-05-11 16:22:41,827 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.convnext.out_balancer.prob, batch_count=697800.0, ans=0.125 2023-05-11 16:22:43,242 WARNING [train.py:1182] (0/2) Exclude cut with ID 1112-1043-0006-89194-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 16:22:44,878 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff2_skip_rate, batch_count=697800.0, ans=0.0 2023-05-11 16:22:48,904 INFO [train.py:1021] (0/2) Epoch 39, batch 1400, loss[loss=0.173, simple_loss=0.2737, pruned_loss=0.03618, over 36826.00 frames. ], tot_loss[loss=0.1623, simple_loss=0.2556, pruned_loss=0.03447, over 7171901.60 frames. ], batch size: 111, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:22:50,650 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff3_skip_rate, batch_count=697850.0, ans=0.0 2023-05-11 16:22:50,883 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.84 vs. limit=6.0 2023-05-11 16:22:56,239 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0 from training. Duration: 20.47 2023-05-11 16:23:16,818 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=697950.0, ans=0.125 2023-05-11 16:23:30,843 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer2.prob, batch_count=697950.0, ans=0.125 2023-05-11 16:23:45,772 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=698000.0, ans=0.1 2023-05-11 16:23:54,630 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=698050.0, ans=0.125 2023-05-11 16:23:55,066 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.08 vs. limit=15.0 2023-05-11 16:24:03,043 INFO [train.py:1021] (0/2) Epoch 39, batch 1450, loss[loss=0.1505, simple_loss=0.238, pruned_loss=0.03152, over 37082.00 frames. ], tot_loss[loss=0.1622, simple_loss=0.2557, pruned_loss=0.03436, over 7198295.87 frames. ], batch size: 88, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:24:03,132 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0037-39912-0_sp0.9 from training. 
Duration: 20.67225 2023-05-11 16:24:11,786 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.664e+02 3.203e+02 3.673e+02 4.352e+02 7.687e+02, threshold=7.346e+02, percent-clipped=3.0 2023-05-11 16:24:24,699 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp0.9 from training. Duration: 25.2444375 2023-05-11 16:24:32,364 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=698200.0, ans=0.125 2023-05-11 16:24:32,794 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=4.77 vs. limit=15.0 2023-05-11 16:24:48,087 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0021-76797-0_sp0.9 from training. Duration: 21.1445 2023-05-11 16:25:17,690 INFO [train.py:1021] (0/2) Epoch 39, batch 1500, loss[loss=0.1688, simple_loss=0.2658, pruned_loss=0.0359, over 32435.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2552, pruned_loss=0.03421, over 7207402.77 frames. ], batch size: 170, lr: 2.81e-03, grad_scale: 32.0 2023-05-11 16:25:18,568 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=698350.0, ans=0.125 2023-05-11 16:25:30,081 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=698350.0, ans=0.1 2023-05-11 16:25:48,963 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=698450.0, ans=0.125 2023-05-11 16:26:05,120 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp0.9 from training. Duration: 33.038875 2023-05-11 16:26:31,299 INFO [train.py:1021] (0/2) Epoch 39, batch 1550, loss[loss=0.1584, simple_loss=0.2539, pruned_loss=0.03149, over 37010.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.255, pruned_loss=0.03401, over 7216407.01 frames. ], batch size: 104, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:26:33,428 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.29 vs. limit=10.0 2023-05-11 16:26:40,037 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.495e+02 3.022e+02 3.318e+02 4.218e+02 7.629e+02, threshold=6.636e+02, percent-clipped=1.0 2023-05-11 16:26:43,099 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff3_skip_rate, batch_count=698600.0, ans=0.0 2023-05-11 16:26:44,281 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64291-0000-16059-0_sp0.9 from training. Duration: 20.0944375 2023-05-11 16:26:53,380 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=698650.0, ans=0.1 2023-05-11 16:27:00,743 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp1.1 from training. Duration: 20.4 2023-05-11 16:27:07,291 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0 from training. Duration: 20.085 2023-05-11 16:27:17,364 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0_sp0.9 from training. 
Duration: 23.07775 2023-05-11 16:27:17,536 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=698750.0, ans=0.2 2023-05-11 16:27:45,062 INFO [train.py:1021] (0/2) Epoch 39, batch 1600, loss[loss=0.1711, simple_loss=0.2672, pruned_loss=0.03748, over 37120.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.255, pruned_loss=0.03388, over 7223764.00 frames. ], batch size: 107, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:27:50,523 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=4.26 vs. limit=12.0 2023-05-11 16:27:55,831 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=698850.0, ans=0.125 2023-05-11 16:27:56,209 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=7.96 vs. limit=22.5 2023-05-11 16:28:04,935 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp0.9 from training. Duration: 24.9333125 2023-05-11 16:28:09,603 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 16:28:16,814 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=698950.0, ans=0.125 2023-05-11 16:28:17,164 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=5.19 vs. limit=15.0 2023-05-11 16:28:22,368 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=698950.0, ans=0.1 2023-05-11 16:28:34,853 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn1.whiten, num_groups=1, num_channels=192, metric=10.61 vs. limit=22.5 2023-05-11 16:28:41,981 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=699000.0, ans=0.1 2023-05-11 16:28:43,357 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=699050.0, ans=0.125 2023-05-11 16:28:43,361 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.attention_skip_rate, batch_count=699050.0, ans=0.0 2023-05-11 16:28:48,944 WARNING [train.py:1182] (0/2) Exclude cut with ID 5118-111612-0016-124680-0_sp0.9 from training. Duration: 20.388875 2023-05-11 16:28:49,639 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=3.85 vs. limit=12.0 2023-05-11 16:28:56,749 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp1.1 from training. Duration: 20.3590625 2023-05-11 16:28:59,637 INFO [train.py:1021] (0/2) Epoch 39, batch 1650, loss[loss=0.1532, simple_loss=0.2389, pruned_loss=0.03376, over 36768.00 frames. ], tot_loss[loss=0.161, simple_loss=0.2545, pruned_loss=0.03377, over 7210587.64 frames. 
], batch size: 89, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:29:08,261 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.498e+02 3.081e+02 3.487e+02 4.126e+02 6.381e+02, threshold=6.974e+02, percent-clipped=0.0 2023-05-11 16:29:42,510 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.39 vs. limit=10.0 2023-05-11 16:30:03,021 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0_sp1.1 from training. Duration: 0.836375 2023-05-11 16:30:03,647 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=7.96 vs. limit=22.5 2023-05-11 16:30:13,357 INFO [train.py:1021] (0/2) Epoch 39, batch 1700, loss[loss=0.1493, simple_loss=0.2355, pruned_loss=0.03152, over 37170.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.2542, pruned_loss=0.03397, over 7210443.07 frames. ], batch size: 93, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:30:27,597 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.whiten.whitening_limit, batch_count=699400.0, ans=15.0 2023-05-11 16:30:28,579 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff3_skip_rate, batch_count=699400.0, ans=0.0 2023-05-11 16:30:36,175 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.91 vs. limit=15.0 2023-05-11 16:30:46,076 WARNING [train.py:1182] (0/2) Exclude cut with ID 8565-290391-0049-67394-0_sp0.9 from training. Duration: 21.3166875 2023-05-11 16:31:13,821 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff2_skip_rate, batch_count=699550.0, ans=0.0 2023-05-11 16:31:17,351 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.9220, 3.1932, 4.6546, 3.0773], device='cuda:0') 2023-05-11 16:31:18,484 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0029-104863-0_sp0.9 from training. Duration: 22.1055625 2023-05-11 16:31:27,010 INFO [train.py:1021] (0/2) Epoch 39, batch 1750, loss[loss=0.1909, simple_loss=0.2798, pruned_loss=0.05102, over 36770.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2552, pruned_loss=0.035, over 7201183.18 frames. ], batch size: 122, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:31:29,935 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp1.1 from training. Duration: 21.77725 2023-05-11 16:31:36,487 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.620e+02 3.329e+02 3.726e+02 4.059e+02 5.941e+02, threshold=7.452e+02, percent-clipped=0.0 2023-05-11 16:31:49,440 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp0.9 from training. Duration: 27.8166875 2023-05-11 16:32:14,944 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp1.1 from training. Duration: 22.5090625 2023-05-11 16:32:22,323 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0 from training. Duration: 25.035 2023-05-11 16:32:38,802 WARNING [train.py:1182] (0/2) Exclude cut with ID 774-127930-0014-10412-0_sp1.1 from training. Duration: 0.95 2023-05-11 16:32:41,539 INFO [train.py:1021] (0/2) Epoch 39, batch 1800, loss[loss=0.1886, simple_loss=0.2768, pruned_loss=0.05019, over 36215.00 frames. 
], tot_loss[loss=0.1636, simple_loss=0.2554, pruned_loss=0.03594, over 7187744.83 frames. ], batch size: 126, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:32:50,605 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=699850.0, ans=0.95 2023-05-11 16:32:59,117 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp0.9 from training. Duration: 0.92225 2023-05-11 16:33:14,294 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.1666, 5.4398, 5.3039, 5.9084], device='cuda:0') 2023-05-11 16:33:25,080 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/checkpoint-140000.pt 2023-05-11 16:33:30,836 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0 from training. Duration: 21.97 2023-05-11 16:33:32,506 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.5728, 4.8581, 5.0934, 4.7121], device='cuda:0') 2023-05-11 16:33:32,546 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=700000.0, ans=0.1 2023-05-11 16:33:49,704 WARNING [train.py:1182] (0/2) Exclude cut with ID 7492-105653-0055-62765-0_sp0.9 from training. Duration: 21.97225 2023-05-11 16:33:51,077 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp0.9 from training. Duration: 25.3333125 2023-05-11 16:33:57,376 INFO [train.py:1021] (0/2) Epoch 39, batch 1850, loss[loss=0.1571, simple_loss=0.2367, pruned_loss=0.03871, over 35412.00 frames. ], tot_loss[loss=0.164, simple_loss=0.2552, pruned_loss=0.03642, over 7217279.30 frames. ], batch size: 78, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:34:03,455 WARNING [train.py:1182] (0/2) Exclude cut with ID 5172-29468-0015-19128-0_sp0.9 from training. Duration: 21.5055625 2023-05-11 16:34:06,195 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.895e+02 3.504e+02 3.921e+02 4.339e+02 8.189e+02, threshold=7.843e+02, percent-clipped=1.0 2023-05-11 16:34:08,050 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=700100.0, ans=0.1 2023-05-11 16:34:10,741 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp1.1 from training. Duration: 20.72725 2023-05-11 16:34:40,638 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=700250.0, ans=0.125 2023-05-11 16:34:46,257 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp0.9 from training. Duration: 26.32775 2023-05-11 16:35:10,621 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.8807, 4.2569, 2.8415, 3.1833], device='cuda:0') 2023-05-11 16:35:11,005 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.77 vs. limit=10.0 2023-05-11 16:35:11,673 INFO [train.py:1021] (0/2) Epoch 39, batch 1900, loss[loss=0.1771, simple_loss=0.2683, pruned_loss=0.04294, over 36802.00 frames. ], tot_loss[loss=0.165, simple_loss=0.2558, pruned_loss=0.03712, over 7223702.48 frames. 
], batch size: 111, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:35:16,034 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0 from training. Duration: 20.025 2023-05-11 16:35:21,813 WARNING [train.py:1182] (0/2) Exclude cut with ID 6709-74022-0004-86860-0_sp1.1 from training. Duration: 0.9409375 2023-05-11 16:35:21,820 WARNING [train.py:1182] (0/2) Exclude cut with ID 4757-1811-0023-62229-0_sp0.9 from training. Duration: 21.37775 2023-05-11 16:35:32,383 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=700400.0, ans=0.0 2023-05-11 16:35:42,977 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0004-25974-0_sp0.9 from training. Duration: 21.17225 2023-05-11 16:35:42,993 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp0.9 from training. Duration: 27.511125 2023-05-11 16:35:50,428 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer2.prob, batch_count=700450.0, ans=0.125 2023-05-11 16:35:50,431 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=700450.0, ans=0.1 2023-05-11 16:35:52,307 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=3.67 vs. limit=6.0 2023-05-11 16:36:03,105 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=5.53 vs. limit=15.0 2023-05-11 16:36:05,875 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=5.67 vs. limit=15.0 2023-05-11 16:36:10,533 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=11.58 vs. limit=15.0 2023-05-11 16:36:12,937 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.2205, 4.1800, 4.8408, 5.0026], device='cuda:0') 2023-05-11 16:36:18,200 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0 from training. Duration: 22.8 2023-05-11 16:36:22,455 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0 from training. Duration: 22.585 2023-05-11 16:36:25,312 INFO [train.py:1021] (0/2) Epoch 39, batch 1950, loss[loss=0.1557, simple_loss=0.2408, pruned_loss=0.03526, over 36768.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2558, pruned_loss=0.0377, over 7190133.03 frames. ], batch size: 89, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:36:28,474 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=700600.0, ans=0.0 2023-05-11 16:36:34,566 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.981e+02 3.439e+02 3.802e+02 4.383e+02 6.938e+02, threshold=7.604e+02, percent-clipped=0.0 2023-05-11 16:36:35,286 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=3.74 vs. limit=12.0 2023-05-11 16:36:51,058 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0001-146967-0_sp0.9 from training. 
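In every [optim.py:478] entry in this excerpt the reported threshold is twice the middle grad-norm quartile (up to print rounding), e.g. 2.0 x 3.802e+02 = 7.604e+02 just above, which suggests the clipping threshold is derived as Clipping_scale times the recent median gradient norm, with percent-clipped counting how often that threshold is exceeded. The sketch below reproduces that bookkeeping over a window of recent norms; it is an illustration of the logged statistics, not the ScaledAdam implementation in optim.py.

import torch

def grad_norm_stats(recent_norms: torch.Tensor, clipping_scale: float = 2.0):
    """Summarize a window of recent gradient norms the way the log lines do.

    Returns (quartiles, threshold, percent_clipped): quartiles are the
    0/25/50/75/100 percentiles, the threshold is clipping_scale * median,
    and percent_clipped is the share of norms above that threshold.
    """
    q = torch.quantile(recent_norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
    threshold = clipping_scale * q[2]
    percent_clipped = 100.0 * (recent_norms > threshold).float().mean()
    return q, threshold, percent_clipped

# Toy usage: 1000 synthetic gradient norms centred near the values logged above.
norms = torch.normal(mean=380.0, std=60.0, size=(1000,)).clamp(min=1.0)
q, thr, pct = grad_norm_stats(norms)
print("quartiles", q.tolist(), "threshold", thr.item(), "percent-clipped", pct.item())

The clipping_scale * median relation is only inferred from the printed numbers; the real optimizer may compute its threshold over a different window or with extra smoothing.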
Duration: 22.0166875 2023-05-11 16:37:04,111 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=700700.0, ans=0.1 2023-05-11 16:37:06,677 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp1.1 from training. Duration: 24.395375 2023-05-11 16:37:14,087 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp0.9 from training. Duration: 27.47775 2023-05-11 16:37:18,403 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp0.9 from training. Duration: 24.8833125 2023-05-11 16:37:21,848 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0 from training. Duration: 23.39 2023-05-11 16:37:27,568 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp0.9 from training. Duration: 28.72225 2023-05-11 16:37:36,330 WARNING [train.py:1182] (0/2) Exclude cut with ID 585-294811-0110-133686-0_sp0.9 from training. Duration: 20.8944375 2023-05-11 16:37:39,271 INFO [train.py:1021] (0/2) Epoch 39, batch 2000, loss[loss=0.1509, simple_loss=0.2302, pruned_loss=0.03578, over 36952.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2561, pruned_loss=0.03847, over 7190496.35 frames. ], batch size: 86, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:37:41,504 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=700850.0, ans=0.0 2023-05-11 16:37:53,108 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0_sp0.9 from training. Duration: 23.8444375 2023-05-11 16:37:56,307 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.prob, batch_count=700900.0, ans=0.125 2023-05-11 16:37:59,117 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=700900.0, ans=0.125 2023-05-11 16:38:02,617 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=9.61 vs. limit=22.5 2023-05-11 16:38:09,364 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.2959, 5.6566, 5.4600, 6.0628], device='cuda:0') 2023-05-11 16:38:15,341 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0 from training. Duration: 25.85 2023-05-11 16:38:15,350 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0 from training. Duration: 21.39 2023-05-11 16:38:25,781 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=701000.0, ans=0.125 2023-05-11 16:38:27,074 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0 from training. Duration: 27.92 2023-05-11 16:38:33,862 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer1.prob, batch_count=701000.0, ans=0.125 2023-05-11 16:38:39,530 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.2278, 3.6301, 3.5045, 4.3049, 2.6239, 3.7540, 4.3133, 3.7605], device='cuda:0') 2023-05-11 16:38:53,660 INFO [train.py:1021] (0/2) Epoch 39, batch 2050, loss[loss=0.179, simple_loss=0.2681, pruned_loss=0.04491, over 36854.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.256, pruned_loss=0.03872, over 7172264.36 frames. 
], batch size: 113, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:38:53,734 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0039-130165-0_sp0.9 from training. Duration: 20.661125 2023-05-11 16:39:02,789 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.757e+02 3.603e+02 4.115e+02 4.533e+02 6.851e+02, threshold=8.229e+02, percent-clipped=0.0 2023-05-11 16:39:16,214 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0043-15874-0_sp0.9 from training. Duration: 20.07225 2023-05-11 16:39:19,422 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=701150.0, ans=0.125 2023-05-11 16:39:24,095 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0 from training. Duration: 21.01 2023-05-11 16:39:53,746 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=701300.0, ans=0.0 2023-05-11 16:40:07,030 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=7.71 vs. limit=15.0 2023-05-11 16:40:07,856 INFO [train.py:1021] (0/2) Epoch 39, batch 2100, loss[loss=0.1435, simple_loss=0.2216, pruned_loss=0.03272, over 35824.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2563, pruned_loss=0.0392, over 7164194.82 frames. ], batch size: 79, lr: 2.80e-03, grad_scale: 16.0 2023-05-11 16:40:08,852 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.conv_module1.whiten, num_groups=1, num_channels=192, metric=8.94 vs. limit=15.0 2023-05-11 16:40:11,211 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.7692, 5.0225, 5.2932, 4.9431], device='cuda:0') 2023-05-11 16:40:24,680 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.prob, batch_count=701400.0, ans=0.125 2023-05-11 16:40:26,462 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.9246, 3.9858, 4.6572, 4.8023], device='cuda:0') 2023-05-11 16:40:30,532 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer1.prob, batch_count=701400.0, ans=0.125 2023-05-11 16:40:33,193 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0 from training. Duration: 20.65 2023-05-11 16:40:39,317 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=701450.0, ans=0.125 2023-05-11 16:40:41,078 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0 from training. 
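The per-batch figures in the [train.py:1021] lines are consistent with the total loss being a weighted sum of the two transducer terms, loss = 0.5 * simple_loss + pruned_loss; for Epoch 39, batch 2100 above, 0.5 x 0.2216 + 0.03272 = 0.1435, matching the printed value. The 0.5 weight is inferred from the logged numbers rather than read from the code; the snippet below just repeats that check.

# Hypothetical check of loss = 0.5 * simple_loss + pruned_loss,
# using the values printed for Epoch 39, batch 2100 above.
simple_loss = 0.2216
pruned_loss = 0.03272
simple_loss_scale = 0.5          # assumed weight, inferred from the log
loss = simple_loss_scale * simple_loss + pruned_loss
print(round(loss, 4))            # 0.1435, matching the logged value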
Duration: 21.46 2023-05-11 16:40:41,283 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=701450.0, ans=0.125 2023-05-11 16:40:44,288 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff2_skip_rate, batch_count=701450.0, ans=0.0 2023-05-11 16:40:47,148 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=701450.0, ans=0.125 2023-05-11 16:40:52,870 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=701500.0, ans=0.0 2023-05-11 16:41:04,569 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=701500.0, ans=0.0 2023-05-11 16:41:22,286 INFO [train.py:1021] (0/2) Epoch 39, batch 2150, loss[loss=0.1536, simple_loss=0.2374, pruned_loss=0.03488, over 37193.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2561, pruned_loss=0.03944, over 7179085.79 frames. ], batch size: 93, lr: 2.80e-03, grad_scale: 16.0 2023-05-11 16:41:28,209 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0 from training. Duration: 0.92 2023-05-11 16:41:31,473 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass.skip_rate, batch_count=701600.0, ans=0.09899494936611666 2023-05-11 16:41:33,072 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.719e+02 3.424e+02 3.603e+02 4.085e+02 5.934e+02, threshold=7.206e+02, percent-clipped=0.0 2023-05-11 16:41:34,616 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0_sp0.9 from training. Duration: 23.7666875 2023-05-11 16:41:59,976 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=701700.0, ans=0.125 2023-05-11 16:42:11,207 WARNING [train.py:1182] (0/2) Exclude cut with ID 8544-281189-0060-101339-0_sp0.9 from training. Duration: 20.861125 2023-05-11 16:42:21,486 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0_sp0.9 from training. Duration: 22.711125 2023-05-11 16:42:33,394 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=701800.0, ans=0.0 2023-05-11 16:42:35,927 INFO [train.py:1021] (0/2) Epoch 39, batch 2200, loss[loss=0.1614, simple_loss=0.2407, pruned_loss=0.04107, over 36850.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.256, pruned_loss=0.03958, over 7148437.47 frames. ], batch size: 89, lr: 2.80e-03, grad_scale: 16.0 2023-05-11 16:42:45,061 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.max_abs, batch_count=701850.0, ans=10.0 2023-05-11 16:42:52,776 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=701900.0, ans=0.1 2023-05-11 16:43:02,642 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp1.1 from training. Duration: 22.986375 2023-05-11 16:43:19,831 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.25 vs. limit=10.0 2023-05-11 16:43:22,128 WARNING [train.py:1182] (0/2) Exclude cut with ID 8040-260924-0003-80960-0_sp0.9 from training. 
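The many [scaling.py:178] entries report the current value ("ans") of a ScheduledFloat, a parameter such as a dropout probability, skip rate, or balancer probability whose value is a function of batch_count. A minimal piecewise-linear version is sketched below; the breakpoints are invented for illustration, since the log only shows the value each named parameter has reached by batch_count of roughly 700000.

import bisect

class ScheduledFloat:
    """A float whose value is a piecewise-linear function of the batch count.

    Breakpoints are (batch_count, value) pairs; outside the given range the
    value is held constant.  The schedule used below is illustrative only;
    the log just reports the current value ("ans") of each named parameter.
    """

    def __init__(self, *points):
        self.points = sorted(points)

    def value_at(self, batch_count: float) -> float:
        xs = [x for x, _ in self.points]
        ys = [y for _, y in self.points]
        if batch_count <= xs[0]:
            return ys[0]
        if batch_count >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, batch_count)
        x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
        return y0 + (y1 - y0) * (batch_count - x0) / (x1 - x0)

# Example: a dropout probability that decays from 0.3 to 0.1 early in training
# and has long since reached its final value by batch_count ~ 700000 as logged.
dropout_p = ScheduledFloat((0.0, 0.3), (20000.0, 0.1))
print(dropout_p.value_at(701450.0))   # 0.1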
Duration: 22.07225 2023-05-11 16:43:25,157 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0045-26330-0_sp0.9 from training. Duration: 20.3055625 2023-05-11 16:43:28,033 WARNING [train.py:1182] (0/2) Exclude cut with ID 6356-271890-0060-94317-0_sp0.9 from training. Duration: 20.72225 2023-05-11 16:43:28,293 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.min_positive, batch_count=702000.0, ans=0.025 2023-05-11 16:43:31,045 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 16:43:37,120 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=4.52 vs. limit=15.0 2023-05-11 16:43:38,215 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=702050.0, ans=0.0 2023-05-11 16:43:41,667 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=11.40 vs. limit=15.0 2023-05-11 16:43:45,601 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp1.1 from training. Duration: 22.4818125 2023-05-11 16:43:48,941 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=702100.0, ans=0.125 2023-05-11 16:43:49,985 INFO [train.py:1021] (0/2) Epoch 39, batch 2250, loss[loss=0.1653, simple_loss=0.2607, pruned_loss=0.03498, over 37003.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2563, pruned_loss=0.03989, over 7150968.96 frames. ], batch size: 104, lr: 2.80e-03, grad_scale: 16.0 2023-05-11 16:43:51,764 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.scale_min, batch_count=702100.0, ans=0.2 2023-05-11 16:44:00,109 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.909e+02 3.506e+02 3.848e+02 4.467e+02 7.273e+02, threshold=7.695e+02, percent-clipped=1.0 2023-05-11 16:44:10,952 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp0.9 from training. Duration: 25.0944375 2023-05-11 16:44:15,323 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0 from training. Duration: 21.515 2023-05-11 16:44:18,593 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.0132, 4.2646, 4.5485, 4.5760], device='cuda:0') 2023-05-11 16:44:22,687 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp0.9 from training. Duration: 27.02225 2023-05-11 16:44:28,706 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0010-62480-0_sp0.9 from training. Duration: 22.22225 2023-05-11 16:44:34,561 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0085-44554-0_sp0.9 from training. Duration: 20.85 2023-05-11 16:45:04,491 INFO [train.py:1021] (0/2) Epoch 39, batch 2300, loss[loss=0.1561, simple_loss=0.2398, pruned_loss=0.0362, over 36942.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.256, pruned_loss=0.03994, over 7145076.44 frames. ], batch size: 91, lr: 2.80e-03, grad_scale: 16.0 2023-05-11 16:45:08,997 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0 from training. Duration: 21.54 2023-05-11 16:45:13,418 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp1.1 from training. 
Duration: 20.5318125 2023-05-11 16:45:14,436 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=5.28 vs. limit=15.0 2023-05-11 16:45:23,479 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0012-134311-0_sp0.9 from training. Duration: 21.9333125 2023-05-11 16:45:31,687 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.3353, 3.7730, 3.7221, 3.9967], device='cuda:0') 2023-05-11 16:46:02,977 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=144, metric=8.77 vs. limit=10.0 2023-05-11 16:46:11,758 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0025-130151-0_sp0.9 from training. Duration: 21.7944375 2023-05-11 16:46:12,371 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=3.18 vs. limit=15.0 2023-05-11 16:46:17,715 INFO [train.py:1021] (0/2) Epoch 39, batch 2350, loss[loss=0.185, simple_loss=0.2721, pruned_loss=0.04892, over 36781.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2562, pruned_loss=0.03997, over 7162385.11 frames. ], batch size: 122, lr: 2.80e-03, grad_scale: 16.0 2023-05-11 16:46:25,637 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0_sp0.9 from training. Duration: 22.4666875 2023-05-11 16:46:28,315 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.037e+02 3.566e+02 4.059e+02 4.997e+02 8.275e+02, threshold=8.119e+02, percent-clipped=1.0 2023-05-11 16:46:32,771 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0 from training. Duration: 21.635 2023-05-11 16:46:38,574 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0_sp0.9 from training. Duration: 24.038875 2023-05-11 16:46:42,215 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.attention_skip_rate, batch_count=702650.0, ans=0.0 2023-05-11 16:46:48,097 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.prob, batch_count=702700.0, ans=0.125 2023-05-11 16:47:08,150 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.1613, 5.3561, 5.4767, 6.0475], device='cuda:0') 2023-05-11 16:47:17,665 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=702800.0, ans=0.0 2023-05-11 16:47:18,849 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp1.1 from training. Duration: 21.786375 2023-05-11 16:47:24,684 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=702800.0, ans=0.1 2023-05-11 16:47:30,918 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0 from training. Duration: 20.22 2023-05-11 16:47:32,234 INFO [train.py:1021] (0/2) Epoch 39, batch 2400, loss[loss=0.1733, simple_loss=0.2682, pruned_loss=0.03919, over 37087.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2552, pruned_loss=0.03964, over 7132694.21 frames. 
], batch size: 110, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:47:48,293 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=702900.0, ans=0.125 2023-05-11 16:47:56,934 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=702900.0, ans=0.1 2023-05-11 16:48:38,536 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.3817, 3.7598, 3.9569, 3.8093], device='cuda:0') 2023-05-11 16:48:45,675 INFO [train.py:1021] (0/2) Epoch 39, batch 2450, loss[loss=0.1577, simple_loss=0.233, pruned_loss=0.04119, over 36950.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2548, pruned_loss=0.03967, over 7149496.45 frames. ], batch size: 86, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:48:55,692 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.897e+02 3.542e+02 3.978e+02 4.785e+02 7.054e+02, threshold=7.955e+02, percent-clipped=0.0 2023-05-11 16:49:05,301 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=703150.0, ans=0.1 2023-05-11 16:49:27,337 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=703200.0, ans=0.0 2023-05-11 16:49:30,303 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.balancer2.prob, batch_count=703250.0, ans=0.125 2023-05-11 16:49:32,994 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0 from training. Duration: 25.285 2023-05-11 16:49:51,331 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.15 vs. limit=22.5 2023-05-11 16:49:59,900 INFO [train.py:1021] (0/2) Epoch 39, batch 2500, loss[loss=0.2149, simple_loss=0.2919, pruned_loss=0.06891, over 25071.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2554, pruned_loss=0.03987, over 7119264.31 frames. ], batch size: 234, lr: 2.80e-03, grad_scale: 32.0 2023-05-11 16:50:19,171 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=703400.0, ans=0.2 2023-05-11 16:50:39,437 WARNING [train.py:1182] (0/2) Exclude cut with ID 811-130148-0001-63453-0_sp0.9 from training. Duration: 20.861125 2023-05-11 16:50:42,107 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.64 vs. limit=22.5 2023-05-11 16:51:03,765 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0 from training. Duration: 20.88 2023-05-11 16:51:04,103 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=703550.0, ans=0.125 2023-05-11 16:51:07,123 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.whiten.whitening_limit, batch_count=703550.0, ans=15.0 2023-05-11 16:51:13,809 INFO [train.py:1021] (0/2) Epoch 39, batch 2550, loss[loss=0.1517, simple_loss=0.2325, pruned_loss=0.03549, over 36776.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2544, pruned_loss=0.03962, over 7130229.98 frames. 
], batch size: 89, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 16:51:18,508 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=703600.0, ans=0.125 2023-05-11 16:51:18,587 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.prob, batch_count=703600.0, ans=0.125 2023-05-11 16:51:19,897 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=703600.0, ans=0.0 2023-05-11 16:51:23,810 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.836e+02 3.550e+02 4.095e+02 4.651e+02 7.064e+02, threshold=8.190e+02, percent-clipped=0.0 2023-05-11 16:51:33,504 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.min_abs, batch_count=703650.0, ans=0.5 2023-05-11 16:51:36,235 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0_sp0.9 from training. Duration: 23.4166875 2023-05-11 16:51:39,848 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=6.52 vs. limit=15.0 2023-05-11 16:51:41,207 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=7.60 vs. limit=15.0 2023-05-11 16:52:01,460 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=703750.0, ans=0.1 2023-05-11 16:52:14,472 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=703800.0, ans=0.0 2023-05-11 16:52:23,101 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.const_attention_rate, batch_count=703800.0, ans=0.025 2023-05-11 16:52:27,553 INFO [train.py:1021] (0/2) Epoch 39, batch 2600, loss[loss=0.1714, simple_loss=0.2638, pruned_loss=0.03943, over 36882.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2546, pruned_loss=0.03973, over 7097129.52 frames. ], batch size: 105, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 16:52:41,380 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.0190, 5.1959, 5.2956, 5.8621], device='cuda:0') 2023-05-11 16:52:52,633 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0 from training. Duration: 21.24 2023-05-11 16:52:52,646 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0_sp0.9 from training. Duration: 23.9055625 2023-05-11 16:53:25,428 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp0.9 from training. Duration: 25.988875 2023-05-11 16:53:32,645 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0001-134300-0_sp0.9 from training. Duration: 20.67225 2023-05-11 16:53:37,092 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.5461, 4.8886, 5.0141, 4.7316], device='cuda:0') 2023-05-11 16:53:38,520 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=704050.0, ans=0.125 2023-05-11 16:53:41,019 INFO [train.py:1021] (0/2) Epoch 39, batch 2650, loss[loss=0.1758, simple_loss=0.2685, pruned_loss=0.04158, over 36741.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2557, pruned_loss=0.04021, over 7089286.66 frames. 
], batch size: 118, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 16:53:49,024 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer2.prob, batch_count=704100.0, ans=0.125 2023-05-11 16:53:51,472 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.906e+02 3.419e+02 3.844e+02 4.498e+02 9.469e+02, threshold=7.689e+02, percent-clipped=1.0 2023-05-11 16:54:11,350 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=704200.0, ans=0.95 2023-05-11 16:54:22,065 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=704200.0, ans=0.125 2023-05-11 16:54:24,742 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0 from training. Duration: 20.34 2023-05-11 16:54:55,104 INFO [train.py:1021] (0/2) Epoch 39, batch 2700, loss[loss=0.1724, simple_loss=0.2623, pruned_loss=0.04122, over 37016.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2549, pruned_loss=0.03974, over 7131989.97 frames. ], batch size: 104, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 16:55:38,115 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp0.9 from training. Duration: 25.061125 2023-05-11 16:55:49,549 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0 from training. Duration: 0.83 2023-05-11 16:55:53,401 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=704550.0, ans=0.125 2023-05-11 16:56:09,661 INFO [train.py:1021] (0/2) Epoch 39, batch 2750, loss[loss=0.1646, simple_loss=0.2535, pruned_loss=0.03782, over 36902.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2561, pruned_loss=0.03997, over 7120106.99 frames. ], batch size: 100, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 16:56:12,910 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=704600.0, ans=0.1 2023-05-11 16:56:14,799 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=9.38 vs. limit=22.5 2023-05-11 16:56:15,483 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0 from training. Duration: 24.73 2023-05-11 16:56:19,762 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.823e+02 3.497e+02 3.921e+02 4.505e+02 8.126e+02, threshold=7.841e+02, percent-clipped=2.0 2023-05-11 16:56:28,798 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0 from training. Duration: 23.965 2023-05-11 16:56:30,874 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=3.65 vs. limit=12.0 2023-05-11 16:56:37,389 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0030-146996-0_sp0.9 from training. Duration: 22.088875 2023-05-11 16:56:54,337 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0_sp0.9 from training. Duration: 23.6 2023-05-11 16:57:22,870 INFO [train.py:1021] (0/2) Epoch 39, batch 2800, loss[loss=0.1721, simple_loss=0.2628, pruned_loss=0.04071, over 36816.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2559, pruned_loss=0.03991, over 7119710.68 frames. 
], batch size: 118, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 16:57:27,491 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=704850.0, ans=0.125 2023-05-11 16:57:53,365 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=704950.0, ans=0.1 2023-05-11 16:58:28,668 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.min_abs, batch_count=705050.0, ans=0.5 2023-05-11 16:58:33,427 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=705050.0, ans=0.1 2023-05-11 16:58:36,095 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0 from training. Duration: 23.795 2023-05-11 16:58:37,508 INFO [train.py:1021] (0/2) Epoch 39, batch 2850, loss[loss=0.1764, simple_loss=0.2673, pruned_loss=0.04277, over 36770.00 frames. ], tot_loss[loss=0.167, simple_loss=0.255, pruned_loss=0.03953, over 7148319.38 frames. ], batch size: 118, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 16:58:47,609 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.875e+02 3.400e+02 3.772e+02 4.210e+02 5.696e+02, threshold=7.544e+02, percent-clipped=0.0 2023-05-11 16:58:52,055 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp1.1 from training. Duration: 21.5409375 2023-05-11 16:58:54,989 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp0.9 from training. Duration: 24.97775 2023-05-11 16:59:05,273 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0_sp0.9 from training. Duration: 23.3444375 2023-05-11 16:59:08,340 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer1.prob, batch_count=705200.0, ans=0.125 2023-05-11 16:59:11,626 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=10.53 vs. limit=12.0 2023-05-11 16:59:33,865 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.prob, batch_count=705250.0, ans=0.125 2023-05-11 16:59:36,589 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0_sp0.9 from training. Duration: 23.2 2023-05-11 16:59:42,449 WARNING [train.py:1182] (0/2) Exclude cut with ID 5653-46179-0060-117930-0_sp0.9 from training. Duration: 21.17225 2023-05-11 16:59:50,956 INFO [train.py:1021] (0/2) Epoch 39, batch 2900, loss[loss=0.1554, simple_loss=0.2373, pruned_loss=0.03676, over 36954.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2549, pruned_loss=0.03962, over 7142644.94 frames. ], batch size: 95, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 17:00:01,409 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.scale_min, batch_count=705350.0, ans=0.2 2023-05-11 17:00:02,562 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp0.9 from training. 
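The [scaling.py:969] Whitening entries compare a whiteness metric of a layer's activations against a limit (e.g. metric=10.53 vs. limit=12.0 above) and only report when the metric becomes large. One plausible whiteness score, used here purely for illustration, is the ratio between the mean squared eigenvalue of the channel covariance and the square of its mean eigenvalue: it equals 1.0 for isotropic features and grows when a few directions dominate. The actual metric in scaling.py may be defined differently.

import torch

def whitening_metric(x: torch.Tensor) -> torch.Tensor:
    """Illustrative whiteness score for activations x of shape (frames, channels).

    Computes the channel covariance and returns mean(eig^2) / mean(eig)^2,
    which is 1.0 for isotropic features and larger when energy concentrates
    in a few directions.  This is a stand-in, not the metric from scaling.py.
    """
    x = x - x.mean(dim=0, keepdim=True)
    cov = (x.T @ x) / x.shape[0]
    eigs = torch.linalg.eigvalsh(cov)          # real, non-negative eigenvalues
    return (eigs ** 2).mean() / eigs.mean() ** 2

limit = 12.0
x = torch.randn(1000, 256)                     # nearly white: metric close to 1
x[:, 0] *= 20.0                                # one dominant channel inflates it
metric = whitening_metric(x).item()
if metric > limit:
    print(f"metric={metric:.2f} vs. limit={limit}")  # would trigger a report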
Duration: 24.6555625 2023-05-11 17:00:02,923 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.5281, 3.7749, 4.2194, 3.8508], device='cuda:0') 2023-05-11 17:00:08,731 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.3776, 3.3696, 3.1435, 3.9502, 2.5720, 3.4360, 3.9871, 3.4851], device='cuda:0') 2023-05-11 17:00:08,770 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer1.prob, batch_count=705400.0, ans=0.125 2023-05-11 17:00:12,149 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.0631, 2.8487, 4.4916, 3.2802], device='cuda:0') 2023-05-11 17:00:34,662 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.whiten_keys.whitening_limit, batch_count=705500.0, ans=6.0 2023-05-11 17:00:35,486 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=705500.0, ans=0.125 2023-05-11 17:00:46,874 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.conv.8.prob, batch_count=705500.0, ans=0.125 2023-05-11 17:00:47,000 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.prob, batch_count=705500.0, ans=0.125 2023-05-11 17:00:57,139 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0 from training. Duration: 20.44 2023-05-11 17:01:05,449 INFO [train.py:1021] (0/2) Epoch 39, batch 2950, loss[loss=0.1576, simple_loss=0.2443, pruned_loss=0.03549, over 37145.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2543, pruned_loss=0.03945, over 7136880.97 frames. ], batch size: 98, lr: 2.79e-03, grad_scale: 16.0 2023-05-11 17:01:11,289 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0_sp0.9 from training. Duration: 23.45 2023-05-11 17:01:16,872 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.877e+02 3.521e+02 3.968e+02 4.656e+02 8.421e+02, threshold=7.937e+02, percent-clipped=2.0 2023-05-11 17:01:34,629 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 17:01:41,742 WARNING [train.py:1182] (0/2) Exclude cut with ID 6945-60535-0076-12784-0_sp0.9 from training. Duration: 20.52225 2023-05-11 17:01:43,453 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=705700.0, ans=0.1 2023-05-11 17:01:49,043 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0 from training. Duration: 22.19 2023-05-11 17:01:58,782 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp1.1 from training. Duration: 25.3818125 2023-05-11 17:02:17,362 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp0.9 from training. Duration: 28.0944375 2023-05-11 17:02:18,804 INFO [train.py:1021] (0/2) Epoch 39, batch 3000, loss[loss=0.1761, simple_loss=0.2687, pruned_loss=0.04179, over 37006.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2551, pruned_loss=0.03978, over 7103673.05 frames. 
], batch size: 116, lr: 2.79e-03, grad_scale: 16.0 2023-05-11 17:02:18,804 INFO [train.py:1048] (0/2) Computing validation loss 2023-05-11 17:02:31,852 INFO [train.py:1057] (0/2) Epoch 39, validation: loss=0.1515, simple_loss=0.2517, pruned_loss=0.02567, over 944034.00 frames. 2023-05-11 17:02:31,853 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 18788MB 2023-05-11 17:02:36,948 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0_sp0.9 from training. Duration: 22.9444375 2023-05-11 17:02:43,401 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp1.1 from training. Duration: 21.6318125 2023-05-11 17:02:51,227 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=4.72 vs. limit=15.0 2023-05-11 17:03:00,791 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0 from training. Duration: 23.695 2023-05-11 17:03:29,024 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0 from training. Duration: 23.955 2023-05-11 17:03:35,014 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.24 vs. limit=6.0 2023-05-11 17:03:47,497 INFO [train.py:1021] (0/2) Epoch 39, batch 3050, loss[loss=0.1798, simple_loss=0.2673, pruned_loss=0.04616, over 37062.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2547, pruned_loss=0.03954, over 7104599.64 frames. ], batch size: 116, lr: 2.79e-03, grad_scale: 16.0 2023-05-11 17:03:52,316 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=706100.0, ans=0.1 2023-05-11 17:03:58,971 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.964e+02 3.399e+02 3.655e+02 4.162e+02 6.807e+02, threshold=7.310e+02, percent-clipped=0.0 2023-05-11 17:04:04,828 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp0.9 from training. Duration: 26.438875 2023-05-11 17:04:07,955 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.nonlin_attention.balancer.prob, batch_count=706150.0, ans=0.125 2023-05-11 17:04:09,462 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 17:04:29,202 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.1658, 5.3719, 5.4536, 6.0183], device='cuda:0') 2023-05-11 17:04:32,252 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer1.prob, batch_count=706250.0, ans=0.125 2023-05-11 17:04:51,891 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0021-26306-0_sp0.9 from training. Duration: 21.2444375 2023-05-11 17:04:51,923 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp0.9 from training. Duration: 31.02225 2023-05-11 17:04:52,228 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.0621, 4.3348, 4.6295, 4.6576], device='cuda:0') 2023-05-11 17:04:52,294 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.skip_rate, batch_count=706300.0, ans=0.04949747468305833 2023-05-11 17:05:00,525 INFO [train.py:1021] (0/2) Epoch 39, batch 3100, loss[loss=0.1608, simple_loss=0.2461, pruned_loss=0.03773, over 37143.00 frames. 
], tot_loss[loss=0.1671, simple_loss=0.2551, pruned_loss=0.03962, over 7118837.00 frames. ], batch size: 98, lr: 2.79e-03, grad_scale: 16.0 2023-05-11 17:05:03,507 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0 from training. Duration: 22.395 2023-05-11 17:05:08,727 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn1.whiten, num_groups=1, num_channels=192, metric=11.62 vs. limit=22.5 2023-05-11 17:05:09,542 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=706350.0, ans=0.1 2023-05-11 17:05:09,575 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=706350.0, ans=0.1 2023-05-11 17:05:16,538 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.attention_skip_rate, batch_count=706400.0, ans=0.0 2023-05-11 17:05:19,180 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0 from training. Duration: 21.075 2023-05-11 17:05:24,883 WARNING [train.py:1182] (0/2) Exclude cut with ID 6482-98857-0025-147532-0_sp0.9 from training. Duration: 20.0055625 2023-05-11 17:05:24,901 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0037-132304-0_sp0.9 from training. Duration: 22.05 2023-05-11 17:05:24,909 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0 from training. Duration: 26.8349375 2023-05-11 17:05:28,586 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=12.22 vs. limit=22.5 2023-05-11 17:05:29,209 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp1.1 from training. Duration: 22.1090625 2023-05-11 17:05:35,112 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp0.9 from training. Duration: 26.6166875 2023-05-11 17:05:53,780 WARNING [train.py:1182] (0/2) Exclude cut with ID 2046-178027-0000-53705-0_sp0.9 from training. Duration: 20.3055625 2023-05-11 17:06:15,002 INFO [train.py:1021] (0/2) Epoch 39, batch 3150, loss[loss=0.1628, simple_loss=0.2461, pruned_loss=0.03973, over 37164.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.255, pruned_loss=0.03966, over 7125324.40 frames. ], batch size: 93, lr: 2.79e-03, grad_scale: 16.0 2023-05-11 17:06:16,649 WARNING [train.py:1182] (0/2) Exclude cut with ID 7205-50138-0008-5373-0_sp0.9 from training. Duration: 20.7 2023-05-11 17:06:26,841 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.028e+02 3.500e+02 3.945e+02 4.566e+02 9.075e+02, threshold=7.890e+02, percent-clipped=1.0 2023-05-11 17:06:40,281 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer2.prob, batch_count=706650.0, ans=0.125 2023-05-11 17:06:52,359 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=706700.0, ans=0.125 2023-05-11 17:06:55,177 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.1494, 4.5355, 3.2081, 3.0965], device='cuda:0') 2023-05-11 17:06:57,830 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0 from training. Duration: 22.48 2023-05-11 17:07:14,460 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp0.9 from training. 
Duration: 29.816625 2023-05-11 17:07:28,891 INFO [train.py:1021] (0/2) Epoch 39, batch 3200, loss[loss=0.1652, simple_loss=0.2584, pruned_loss=0.03597, over 36916.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2552, pruned_loss=0.03976, over 7109124.68 frames. ], batch size: 105, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 17:07:32,600 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=3.46 vs. limit=6.0 2023-05-11 17:07:34,738 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp1.1 from training. Duration: 22.7590625 2023-05-11 17:07:40,581 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0 from training. Duration: 22.555 2023-05-11 17:07:55,942 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.skip_rate, batch_count=706900.0, ans=0.04949747468305833 2023-05-11 17:08:01,318 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0005-25975-0_sp0.9 from training. Duration: 21.688875 2023-05-11 17:08:03,158 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=706950.0, ans=0.125 2023-05-11 17:08:10,382 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.prob, batch_count=706950.0, ans=0.125 2023-05-11 17:08:12,004 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.prob, batch_count=707000.0, ans=0.125 2023-05-11 17:08:31,675 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=9.12 vs. limit=15.0 2023-05-11 17:08:38,791 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0_sp0.9 from training. Duration: 22.6 2023-05-11 17:08:41,965 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass_mid.scale_min, batch_count=707100.0, ans=0.2 2023-05-11 17:08:43,115 INFO [train.py:1021] (0/2) Epoch 39, batch 3250, loss[loss=0.1626, simple_loss=0.247, pruned_loss=0.03909, over 37088.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2547, pruned_loss=0.03945, over 7128626.26 frames. ], batch size: 94, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 17:08:48,836 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.whiten_keys.whitening_limit, batch_count=707100.0, ans=6.0 2023-05-11 17:08:51,694 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=8.23 vs. limit=15.0 2023-05-11 17:08:55,351 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.782e+02 3.610e+02 3.902e+02 4.418e+02 6.244e+02, threshold=7.804e+02, percent-clipped=0.0 2023-05-11 17:09:13,177 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.conv.5.prob, batch_count=707200.0, ans=0.125 2023-05-11 17:09:14,515 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0 from training. Duration: 24.32 2023-05-11 17:09:23,373 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer1.prob, batch_count=707200.0, ans=0.125 2023-05-11 17:09:57,565 INFO [train.py:1021] (0/2) Epoch 39, batch 3300, loss[loss=0.1673, simple_loss=0.2597, pruned_loss=0.03744, over 34923.00 frames. 
], tot_loss[loss=0.1668, simple_loss=0.2548, pruned_loss=0.03942, over 7090072.65 frames. ], batch size: 145, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 17:10:07,994 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer2.prob, batch_count=707350.0, ans=0.125 2023-05-11 17:10:13,569 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-276745-0093-13116-0_sp0.9 from training. Duration: 21.061125 2023-05-11 17:10:27,826 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0024-15855-0_sp0.9 from training. Duration: 20.32225 2023-05-11 17:10:40,688 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp1.1 from training. Duration: 0.7545625 2023-05-11 17:10:45,241 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=707500.0, ans=0.0 2023-05-11 17:10:56,772 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0_sp0.9 from training. Duration: 23.9333125 2023-05-11 17:11:00,126 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.7195, 3.1632, 4.4666, 2.9311], device='cuda:0') 2023-05-11 17:11:09,188 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=707550.0, ans=0.2 2023-05-11 17:11:11,790 INFO [train.py:1021] (0/2) Epoch 39, batch 3350, loss[loss=0.1627, simple_loss=0.2529, pruned_loss=0.03625, over 37016.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2552, pruned_loss=0.03952, over 7095035.35 frames. ], batch size: 104, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 17:11:16,616 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 17:11:23,862 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.982e+02 3.379e+02 3.844e+02 4.375e+02 7.349e+02, threshold=7.688e+02, percent-clipped=0.0 2023-05-11 17:11:26,855 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp1.1 from training. Duration: 20.17275 2023-05-11 17:11:32,667 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp1.1 from training. Duration: 20.436375 2023-05-11 17:11:38,983 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=10.68 vs. limit=15.0 2023-05-11 17:11:51,033 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=3.83 vs. limit=12.0 2023-05-11 17:11:56,876 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=707750.0, ans=0.1 2023-05-11 17:11:59,968 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.whiten.whitening_limit, batch_count=707750.0, ans=15.0 2023-05-11 17:12:10,347 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=707800.0, ans=0.125 2023-05-11 17:12:26,071 INFO [train.py:1021] (0/2) Epoch 39, batch 3400, loss[loss=0.1692, simple_loss=0.2666, pruned_loss=0.03592, over 37153.00 frames. ], tot_loss[loss=0.167, simple_loss=0.255, pruned_loss=0.03949, over 7078842.99 frames. ], batch size: 112, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 17:12:52,994 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0_sp0.9 from training. 
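Each [train.py:1021] line pairs the loss on the current batch with a tot_loss accumulated over millions of frames, so tot_loss moves much more slowly than the per-batch numbers. A simple frame-weighted running average with exponential forgetting, sketched below with toy numbers in the spirit of the batches above, behaves this way; the 0.999 decay and the bookkeeping are assumptions, not the tracker actually used by train.py.

class RunningLoss:
    """Frame-weighted running average of a loss, with exponential forgetting.

    Each update adds (loss * frames, frames) to decayed accumulators, so the
    reported value is dominated by recent batches.  The 0.999 decay is an
    assumed constant chosen only for illustration.
    """

    def __init__(self, decay: float = 0.999):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, loss: float, num_frames: float) -> None:
        self.loss_sum = self.decay * self.loss_sum + loss * num_frames
        self.frames = self.decay * self.frames + num_frames

    @property
    def value(self) -> float:
        return self.loss_sum / max(self.frames, 1.0)

tracker = RunningLoss()
for batch_loss, batch_frames in [(0.167, 36848.0), (0.163, 37016.0), (0.170, 36723.0)]:
    tracker.update(batch_loss, batch_frames)
print(f"tot_loss={tracker.value:.4f} over {tracker.frames:.2f} frames")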
Duration: 23.1055625 2023-05-11 17:12:56,006 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp1.1 from training. Duration: 23.5 2023-05-11 17:12:57,637 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=707950.0, ans=0.0 2023-05-11 17:13:05,204 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp0.9 from training. Duration: 26.62775 2023-05-11 17:13:19,868 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0 from training. Duration: 21.105 2023-05-11 17:13:25,475 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0_sp0.9 from training. Duration: 24.411125 2023-05-11 17:13:40,282 INFO [train.py:1021] (0/2) Epoch 39, batch 3450, loss[loss=0.167, simple_loss=0.2527, pruned_loss=0.04064, over 36723.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2547, pruned_loss=0.03937, over 7092065.40 frames. ], batch size: 118, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 17:13:43,620 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=708100.0, ans=0.95 2023-05-11 17:13:47,835 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=708100.0, ans=0.1 2023-05-11 17:13:50,825 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=708100.0, ans=0.1 2023-05-11 17:13:51,878 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.856e+02 3.630e+02 4.078e+02 4.580e+02 7.017e+02, threshold=8.157e+02, percent-clipped=0.0 2023-05-11 17:13:54,124 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp1.1 from training. Duration: 21.263625 2023-05-11 17:13:54,370 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff2_skip_rate, batch_count=708150.0, ans=0.0 2023-05-11 17:14:23,562 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.71 vs. limit=15.0 2023-05-11 17:14:27,265 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0 from training. Duration: 20.795 2023-05-11 17:14:39,547 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0 from training. Duration: 24.76 2023-05-11 17:14:39,559 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0_sp0.9 from training. Duration: 22.25 2023-05-11 17:14:54,545 INFO [train.py:1021] (0/2) Epoch 39, batch 3500, loss[loss=0.1733, simple_loss=0.2649, pruned_loss=0.04083, over 36848.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2545, pruned_loss=0.03909, over 7110210.39 frames. ], batch size: 113, lr: 2.79e-03, grad_scale: 32.0 2023-05-11 17:15:06,164 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp1.1 from training. 
Duration: 20.5045625 2023-05-11 17:15:12,403 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=708400.0, ans=0.125 2023-05-11 17:15:43,260 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=708500.0, ans=0.2 2023-05-11 17:15:48,916 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.1860, 4.5221, 4.6689, 4.3908], device='cuda:0') 2023-05-11 17:15:51,976 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=9.27 vs. limit=15.0 2023-05-11 17:15:54,174 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=708550.0, ans=0.0 2023-05-11 17:16:02,789 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=708550.0, ans=0.2 2023-05-11 17:16:06,838 INFO [train.py:1021] (0/2) Epoch 39, batch 3550, loss[loss=0.1638, simple_loss=0.2537, pruned_loss=0.03692, over 36923.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2556, pruned_loss=0.03957, over 7084749.07 frames. ], batch size: 100, lr: 2.78e-03, grad_scale: 32.0 2023-05-11 17:16:17,827 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.798e+02 3.458e+02 3.906e+02 4.682e+02 6.911e+02, threshold=7.813e+02, percent-clipped=0.0 2023-05-11 17:16:37,920 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=708700.0, ans=0.2 2023-05-11 17:16:59,403 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=708750.0, ans=0.0 2023-05-11 17:17:02,692 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=7.72 vs. limit=22.5 2023-05-11 17:17:05,603 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff3_skip_rate, batch_count=708800.0, ans=0.0 2023-05-11 17:17:17,900 INFO [train.py:1021] (0/2) Epoch 39, batch 3600, loss[loss=0.1757, simple_loss=0.271, pruned_loss=0.0402, over 36945.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2545, pruned_loss=0.03914, over 7123001.93 frames. 
], batch size: 108, lr: 2.78e-03, grad_scale: 32.0 2023-05-11 17:17:18,143 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer1.max_abs, batch_count=708850.0, ans=10.0 2023-05-11 17:17:28,549 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3707, 4.6153, 2.2849, 2.4634], device='cuda:0') 2023-05-11 17:17:32,528 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=708900.0, ans=0.125 2023-05-11 17:17:38,242 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=708900.0, ans=0.07 2023-05-11 17:17:39,707 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.8114, 4.0146, 4.3800, 4.4222], device='cuda:0') 2023-05-11 17:17:53,401 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=708950.0, ans=0.0 2023-05-11 17:18:07,460 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/epoch-39.pt 2023-05-11 17:18:21,675 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp1.1 from training. Duration: 22.2954375 2023-05-11 17:18:26,743 INFO [train.py:1021] (0/2) Epoch 40, batch 0, loss[loss=0.1785, simple_loss=0.2741, pruned_loss=0.04139, over 36703.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2741, pruned_loss=0.04139, over 36703.00 frames. ], batch size: 118, lr: 2.75e-03, grad_scale: 32.0 2023-05-11 17:18:26,743 INFO [train.py:1048] (0/2) Computing validation loss 2023-05-11 17:18:33,682 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.5154, 3.2697, 3.0330, 3.6583, 2.3316, 3.3256, 3.7329, 3.3736], device='cuda:0') 2023-05-11 17:18:40,255 INFO [train.py:1057] (0/2) Epoch 40, validation: loss=0.1516, simple_loss=0.2518, pruned_loss=0.02564, over 944034.00 frames. 2023-05-11 17:18:40,256 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 18788MB 2023-05-11 17:18:58,555 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.75 vs. limit=6.0 2023-05-11 17:19:12,677 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.756e+02 3.581e+02 3.992e+02 4.780e+02 8.880e+02, threshold=7.985e+02, percent-clipped=1.0 2023-05-11 17:19:21,807 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=709130.0, ans=0.125 2023-05-11 17:19:22,316 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=7.72 vs. limit=22.5 2023-05-11 17:19:33,837 WARNING [train.py:1182] (0/2) Exclude cut with ID 298-126791-0067-24026-0_sp0.9 from training. Duration: 21.438875 2023-05-11 17:19:38,555 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.1807, 4.5464, 3.3069, 3.0896], device='cuda:0') 2023-05-11 17:19:39,667 WARNING [train.py:1182] (0/2) Exclude cut with ID 5652-39938-0025-23684-0_sp0.9 from training. 
Duration: 22.2055625 2023-05-11 17:19:54,109 INFO [train.py:1021] (0/2) Epoch 40, batch 50, loss[loss=0.1609, simple_loss=0.2508, pruned_loss=0.03546, over 37157.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.2537, pruned_loss=0.03422, over 1630518.10 frames. ], batch size: 98, lr: 2.75e-03, grad_scale: 32.0 2023-05-11 17:20:05,233 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=709280.0, ans=0.125 2023-05-11 17:20:35,940 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=709380.0, ans=0.125 2023-05-11 17:20:47,441 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.9074, 5.3335, 5.1150, 5.7204], device='cuda:0') 2023-05-11 17:21:07,917 INFO [train.py:1021] (0/2) Epoch 40, batch 100, loss[loss=0.1575, simple_loss=0.251, pruned_loss=0.03194, over 36929.00 frames. ], tot_loss[loss=0.1612, simple_loss=0.2544, pruned_loss=0.03402, over 2892790.50 frames. ], batch size: 100, lr: 2.75e-03, grad_scale: 32.0 2023-05-11 17:21:36,581 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.2652, 5.0512, 4.4523, 4.8166], device='cuda:0') 2023-05-11 17:21:40,646 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.529e+02 3.098e+02 3.751e+02 4.632e+02 7.417e+02, threshold=7.501e+02, percent-clipped=0.0 2023-05-11 17:22:19,251 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=709730.0, ans=0.0 2023-05-11 17:22:21,846 INFO [train.py:1021] (0/2) Epoch 40, batch 150, loss[loss=0.1487, simple_loss=0.2349, pruned_loss=0.03126, over 36763.00 frames. ], tot_loss[loss=0.1601, simple_loss=0.2528, pruned_loss=0.03371, over 3824326.84 frames. ], batch size: 89, lr: 2.75e-03, grad_scale: 32.0 2023-05-11 17:22:29,332 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=709780.0, ans=0.1 2023-05-11 17:22:30,666 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=709780.0, ans=0.1 2023-05-11 17:22:39,653 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0 from training. Duration: 24.525 2023-05-11 17:23:06,607 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.prob, batch_count=709930.0, ans=0.125 2023-05-11 17:23:16,527 WARNING [train.py:1182] (0/2) Exclude cut with ID 3699-47246-0007-3408-0_sp0.9 from training. Duration: 20.26675 2023-05-11 17:23:31,257 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=10.74 vs. limit=15.0 2023-05-11 17:23:31,892 WARNING [train.py:1182] (0/2) Exclude cut with ID 7859-102521-0017-7548-0_sp0.9 from training. Duration: 27.25 2023-05-11 17:23:36,173 INFO [train.py:1021] (0/2) Epoch 40, batch 200, loss[loss=0.1675, simple_loss=0.2667, pruned_loss=0.03417, over 34770.00 frames. ], tot_loss[loss=0.1597, simple_loss=0.252, pruned_loss=0.03366, over 4585831.62 frames. 
], batch size: 145, lr: 2.75e-03, grad_scale: 32.0 2023-05-11 17:23:42,398 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.9651, 4.1303, 4.5001, 4.5291], device='cuda:0') 2023-05-11 17:24:08,681 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.602e+02 3.016e+02 3.354e+02 4.022e+02 7.640e+02, threshold=6.708e+02, percent-clipped=1.0 2023-05-11 17:24:20,614 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.const_attention_rate, batch_count=710180.0, ans=0.025 2023-05-11 17:24:25,763 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward3.out_whiten.whitening_limit, batch_count=710180.0, ans=15.0 2023-05-11 17:24:38,248 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.skip_rate, batch_count=710230.0, ans=0.035 2023-05-11 17:24:44,213 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=710230.0, ans=0.0 2023-05-11 17:24:45,278 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0 from training. Duration: 21.68 2023-05-11 17:24:49,553 INFO [train.py:1021] (0/2) Epoch 40, batch 250, loss[loss=0.1588, simple_loss=0.2555, pruned_loss=0.0311, over 36952.00 frames. ], tot_loss[loss=0.1594, simple_loss=0.252, pruned_loss=0.03344, over 5142501.68 frames. ], batch size: 108, lr: 2.75e-03, grad_scale: 32.0 2023-05-11 17:24:57,382 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0 from training. Duration: 21.6300625 2023-05-11 17:25:18,771 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=710380.0, ans=0.1 2023-05-11 17:25:22,724 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0007-59342-0_sp0.9 from training. Duration: 24.033375 2023-05-11 17:25:29,909 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.convnext.layerdrop_rate, batch_count=710380.0, ans=0.015 2023-05-11 17:26:03,620 INFO [train.py:1021] (0/2) Epoch 40, batch 300, loss[loss=0.1614, simple_loss=0.2529, pruned_loss=0.03496, over 36747.00 frames. ], tot_loss[loss=0.1591, simple_loss=0.2516, pruned_loss=0.0333, over 5602130.42 frames. ], batch size: 118, lr: 2.75e-03, grad_scale: 32.0 2023-05-11 17:26:21,658 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0 from training. Duration: 22.905 2023-05-11 17:26:21,817 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff2_skip_rate, batch_count=710580.0, ans=0.0 2023-05-11 17:26:23,217 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp1.1 from training. 
Duration: 23.4318125 2023-05-11 17:26:23,466 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=710580.0, ans=0.0 2023-05-11 17:26:25,032 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.prob, batch_count=710580.0, ans=0.125 2023-05-11 17:26:28,018 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=710580.0, ans=0.125 2023-05-11 17:26:36,268 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.546e+02 3.034e+02 3.912e+02 4.598e+02 7.211e+02, threshold=7.824e+02, percent-clipped=1.0 2023-05-11 17:26:41,536 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.scale_min, batch_count=710630.0, ans=0.2 2023-05-11 17:26:54,743 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 17:26:57,653 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=710680.0, ans=0.1 2023-05-11 17:27:04,017 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=710730.0, ans=0.2 2023-05-11 17:27:05,556 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 17:27:14,081 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=710730.0, ans=0.125 2023-05-11 17:27:16,083 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=11.20 vs. limit=15.0 2023-05-11 17:27:17,992 INFO [train.py:1021] (0/2) Epoch 40, batch 350, loss[loss=0.1666, simple_loss=0.2646, pruned_loss=0.03424, over 34665.00 frames. ], tot_loss[loss=0.1597, simple_loss=0.2522, pruned_loss=0.03356, over 5937740.00 frames. ], batch size: 144, lr: 2.74e-03, grad_scale: 32.0 2023-05-11 17:27:45,112 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=710830.0, ans=0.125 2023-05-11 17:28:13,012 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=710930.0, ans=0.125 2023-05-11 17:28:17,425 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=710980.0, ans=0.1 2023-05-11 17:28:24,470 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp1.1 from training. Duration: 20.82275 2023-05-11 17:28:26,368 WARNING [train.py:1182] (0/2) Exclude cut with ID 4278-13270-0009-59344-0_sp0.9 from training. Duration: 25.45 2023-05-11 17:28:32,089 INFO [train.py:1021] (0/2) Epoch 40, batch 400, loss[loss=0.1726, simple_loss=0.2675, pruned_loss=0.03886, over 36688.00 frames. ], tot_loss[loss=0.1601, simple_loss=0.2531, pruned_loss=0.03356, over 6244037.64 frames. 
], batch size: 118, lr: 2.74e-03, grad_scale: 32.0 2023-05-11 17:28:39,610 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=711030.0, ans=0.0 2023-05-11 17:29:04,528 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.347e+02 2.909e+02 3.282e+02 3.561e+02 7.602e+02, threshold=6.564e+02, percent-clipped=0.0 2023-05-11 17:29:11,985 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=711130.0, ans=0.1 2023-05-11 17:29:23,681 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0 from training. Duration: 25.775 2023-05-11 17:29:44,599 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0_sp0.9 from training. Duration: 22.25 2023-05-11 17:29:46,031 INFO [train.py:1021] (0/2) Epoch 40, batch 450, loss[loss=0.1494, simple_loss=0.2361, pruned_loss=0.03138, over 34191.00 frames. ], tot_loss[loss=0.1607, simple_loss=0.2538, pruned_loss=0.03376, over 6432674.36 frames. ], batch size: 75, lr: 2.74e-03, grad_scale: 32.0 2023-05-11 17:30:08,626 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=711330.0, ans=0.125 2023-05-11 17:30:12,759 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0 from training. Duration: 26.205 2023-05-11 17:30:24,883 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.2790, 4.3853, 2.2143, 2.3915], device='cuda:0') 2023-05-11 17:30:30,165 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp0.9 from training. Duration: 30.1555625 2023-05-11 17:30:31,852 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass_mid.scale_min, batch_count=711430.0, ans=0.2 2023-05-11 17:30:36,708 WARNING [train.py:1182] (0/2) Exclude cut with ID 1265-135635-0050-6781-0_sp0.9 from training. Duration: 21.8333125 2023-05-11 17:30:36,982 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=711430.0, ans=0.0 2023-05-11 17:30:38,383 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=711430.0, ans=0.0 2023-05-11 17:30:46,656 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp1.1 from training. Duration: 20.6545625 2023-05-11 17:30:52,885 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.prob, batch_count=711480.0, ans=0.125 2023-05-11 17:30:54,436 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.min_positive, batch_count=711480.0, ans=0.05 2023-05-11 17:30:59,732 INFO [train.py:1021] (0/2) Epoch 40, batch 500, loss[loss=0.1666, simple_loss=0.2652, pruned_loss=0.03402, over 36937.00 frames. ], tot_loss[loss=0.1612, simple_loss=0.2545, pruned_loss=0.03394, over 6614139.39 frames. ], batch size: 105, lr: 2.74e-03, grad_scale: 32.0 2023-05-11 17:31:23,956 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=711580.0, ans=0.125 2023-05-11 17:31:31,582 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0045-39920-0_sp0.9 from training. 
Duration: 20.52225 2023-05-11 17:31:32,844 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.266e+02 3.110e+02 3.435e+02 3.873e+02 6.214e+02, threshold=6.870e+02, percent-clipped=0.0 2023-05-11 17:31:51,865 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp0.9 from training. Duration: 29.1166875 2023-05-11 17:31:53,497 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=711680.0, ans=0.0 2023-05-11 17:32:13,876 INFO [train.py:1021] (0/2) Epoch 40, batch 550, loss[loss=0.1675, simple_loss=0.2666, pruned_loss=0.0342, over 36951.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2546, pruned_loss=0.03395, over 6777744.98 frames. ], batch size: 108, lr: 2.74e-03, grad_scale: 32.0 2023-05-11 17:32:14,238 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=711780.0, ans=0.0 2023-05-11 17:32:27,452 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=711830.0, ans=0.1 2023-05-11 17:32:40,548 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.6366, 3.3890, 3.1644, 3.9726, 2.2829, 3.4417, 3.9945, 3.5213], device='cuda:0') 2023-05-11 17:32:49,607 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.prob, batch_count=711880.0, ans=0.125 2023-05-11 17:32:50,891 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133211-0007-59831-0_sp0.9 from training. Duration: 21.388875 2023-05-11 17:33:11,634 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=8.04 vs. limit=15.0 2023-05-11 17:33:27,710 INFO [train.py:1021] (0/2) Epoch 40, batch 600, loss[loss=0.1602, simple_loss=0.2499, pruned_loss=0.03523, over 37172.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2554, pruned_loss=0.03428, over 6901733.99 frames. ], batch size: 93, lr: 2.74e-03, grad_scale: 32.0 2023-05-11 17:33:29,301 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0 from training. Duration: 22.72 2023-05-11 17:33:30,751 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0_sp0.9 from training. Duration: 22.7444375 2023-05-11 17:33:41,635 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer2.prob, batch_count=712080.0, ans=0.125 2023-05-11 17:33:59,570 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=14.13 vs. limit=22.5 2023-05-11 17:34:00,037 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.467e+02 3.163e+02 3.494e+02 4.257e+02 7.017e+02, threshold=6.987e+02, percent-clipped=1.0 2023-05-11 17:34:13,454 WARNING [train.py:1182] (0/2) Exclude cut with ID 4133-6541-0027-40495-0_sp1.1 from training. Duration: 0.9681875 2023-05-11 17:34:17,729 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0_sp0.9 from training. Duration: 22.3166875 2023-05-11 17:34:20,918 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=712180.0, ans=0.0 2023-05-11 17:34:22,074 WARNING [train.py:1182] (0/2) Exclude cut with ID 543-133212-0015-59917-0_sp0.9 from training. 
Duration: 21.8166875 2023-05-11 17:34:37,219 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.attention_skip_rate, batch_count=712230.0, ans=0.0 2023-05-11 17:34:41,306 INFO [train.py:1021] (0/2) Epoch 40, batch 650, loss[loss=0.1708, simple_loss=0.2691, pruned_loss=0.03623, over 32541.00 frames. ], tot_loss[loss=0.1619, simple_loss=0.2552, pruned_loss=0.03435, over 6951273.92 frames. ], batch size: 171, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:34:56,116 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer1.prob, batch_count=712330.0, ans=0.125 2023-05-11 17:35:17,025 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward3.hidden_balancer.prob, batch_count=712380.0, ans=0.125 2023-05-11 17:35:17,035 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=712380.0, ans=0.0 2023-05-11 17:35:17,464 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=2.92 vs. limit=12.0 2023-05-11 17:35:37,685 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.min_positive, batch_count=712430.0, ans=0.05 2023-05-11 17:35:54,984 INFO [train.py:1021] (0/2) Epoch 40, batch 700, loss[loss=0.1726, simple_loss=0.2743, pruned_loss=0.03546, over 37070.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2555, pruned_loss=0.03428, over 7003180.07 frames. ], batch size: 110, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:35:57,331 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=712530.0, ans=0.125 2023-05-11 17:36:04,073 WARNING [train.py:1182] (0/2) Exclude cut with ID 4957-30119-0041-23990-0_sp0.9 from training. Duration: 20.22775 2023-05-11 17:36:29,011 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.413e+02 3.301e+02 3.718e+02 4.909e+02 8.615e+02, threshold=7.436e+02, percent-clipped=3.0 2023-05-11 17:36:31,248 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=3.90 vs. limit=15.0 2023-05-11 17:36:32,183 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=712630.0, ans=0.125 2023-05-11 17:36:42,333 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=712680.0, ans=0.125 2023-05-11 17:36:47,559 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0_sp1.1 from training. Duration: 24.67275 2023-05-11 17:37:05,964 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=712730.0, ans=0.1 2023-05-11 17:37:08,107 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=712780.0, ans=0.2 2023-05-11 17:37:09,190 INFO [train.py:1021] (0/2) Epoch 40, batch 750, loss[loss=0.1586, simple_loss=0.2483, pruned_loss=0.03441, over 37038.00 frames. ], tot_loss[loss=0.1624, simple_loss=0.2558, pruned_loss=0.03452, over 7038205.14 frames. 
], batch size: 99, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:37:12,293 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.scale_min, batch_count=712780.0, ans=0.2 2023-05-11 17:37:19,398 WARNING [train.py:1182] (0/2) Exclude cut with ID 3082-165428-0081-50734-0_sp0.9 from training. Duration: 21.8055625 2023-05-11 17:37:42,814 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=712880.0, ans=0.125 2023-05-11 17:37:43,383 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=5.13 vs. limit=12.0 2023-05-11 17:37:57,674 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0_sp0.9 from training. Duration: 22.6666875 2023-05-11 17:38:22,918 INFO [train.py:1021] (0/2) Epoch 40, batch 800, loss[loss=0.1436, simple_loss=0.2345, pruned_loss=0.02639, over 37064.00 frames. ], tot_loss[loss=0.1618, simple_loss=0.2549, pruned_loss=0.03431, over 7081673.75 frames. ], batch size: 94, lr: 2.74e-03, grad_scale: 32.0 2023-05-11 17:38:33,633 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.5959, 4.9126, 5.0918, 4.7515], device='cuda:0') 2023-05-11 17:38:47,404 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=713080.0, ans=0.125 2023-05-11 17:38:57,548 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.620e+02 3.069e+02 3.484e+02 4.115e+02 6.952e+02, threshold=6.968e+02, percent-clipped=0.0 2023-05-11 17:38:59,295 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=713130.0, ans=0.1 2023-05-11 17:39:00,515 WARNING [train.py:1182] (0/2) Exclude cut with ID 2411-132532-0017-82279-0_sp1.1 from training. Duration: 0.9681875 2023-05-11 17:39:04,136 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=7.24 vs. limit=15.0 2023-05-11 17:39:13,911 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=713180.0, ans=0.2 2023-05-11 17:39:16,757 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=713180.0, ans=0.2 2023-05-11 17:39:27,997 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0 from training. Duration: 22.485 2023-05-11 17:39:36,907 INFO [train.py:1021] (0/2) Epoch 40, batch 850, loss[loss=0.1391, simple_loss=0.2242, pruned_loss=0.02703, over 36859.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2547, pruned_loss=0.03399, over 7135382.51 frames. ], batch size: 84, lr: 2.74e-03, grad_scale: 32.0 2023-05-11 17:39:56,644 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=713330.0, ans=0.125 2023-05-11 17:39:59,593 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.0761, 5.2927, 5.3660, 5.9362], device='cuda:0') 2023-05-11 17:40:08,092 WARNING [train.py:1182] (0/2) Exclude cut with ID 3972-170212-0014-23379-0_sp1.1 from training. Duration: 23.82275 2023-05-11 17:40:22,247 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0 from training. 
Duration: 20.77 2023-05-11 17:40:31,624 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64292-0017-15984-0_sp0.9 from training. Duration: 24.088875 2023-05-11 17:40:50,947 INFO [train.py:1021] (0/2) Epoch 40, batch 900, loss[loss=0.1629, simple_loss=0.2595, pruned_loss=0.03312, over 37012.00 frames. ], tot_loss[loss=0.1609, simple_loss=0.2541, pruned_loss=0.03382, over 7168785.05 frames. ], batch size: 104, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:40:54,178 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=713530.0, ans=0.1 2023-05-11 17:41:02,608 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp1.1 from training. Duration: 20.4409375 2023-05-11 17:41:15,831 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=713580.0, ans=0.125 2023-05-11 17:41:26,216 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.588e+02 3.100e+02 3.505e+02 4.131e+02 6.658e+02, threshold=7.010e+02, percent-clipped=0.0 2023-05-11 17:42:04,447 INFO [train.py:1021] (0/2) Epoch 40, batch 950, loss[loss=0.1358, simple_loss=0.2203, pruned_loss=0.02565, over 36938.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2546, pruned_loss=0.03399, over 7151883.23 frames. ], batch size: 86, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:42:16,705 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0_sp0.9 from training. Duration: 22.511125 2023-05-11 17:42:18,157 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0 from training. Duration: 20.675 2023-05-11 17:42:36,107 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.1222, 4.1262, 3.8247, 4.1606, 3.5349, 3.2512, 3.5778, 3.1489], device='cuda:0') 2023-05-11 17:42:50,786 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=713930.0, ans=0.125 2023-05-11 17:42:59,874 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=4.24 vs. limit=15.0 2023-05-11 17:43:11,976 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.min_positive, batch_count=713980.0, ans=0.025 2023-05-11 17:43:13,755 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=713980.0, ans=0.125 2023-05-11 17:43:19,365 INFO [train.py:1021] (0/2) Epoch 40, batch 1000, loss[loss=0.1689, simple_loss=0.2641, pruned_loss=0.03681, over 37034.00 frames. ], tot_loss[loss=0.1608, simple_loss=0.2539, pruned_loss=0.03384, over 7154356.29 frames. ], batch size: 104, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:43:26,942 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer2.prob, batch_count=714030.0, ans=0.125 2023-05-11 17:43:54,211 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.597e+02 3.262e+02 3.782e+02 4.527e+02 7.003e+02, threshold=7.565e+02, percent-clipped=0.0 2023-05-11 17:43:54,311 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62850-0007-91323-0_sp0.9 from training. 
Duration: 24.9833125 2023-05-11 17:44:18,866 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=714230.0, ans=0.125 2023-05-11 17:44:25,946 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0047-9341-0 from training. Duration: 27.14 2023-05-11 17:44:33,184 INFO [train.py:1021] (0/2) Epoch 40, batch 1050, loss[loss=0.1758, simple_loss=0.2754, pruned_loss=0.03813, over 34464.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2545, pruned_loss=0.034, over 7171335.05 frames. ], batch size: 144, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:44:42,005 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0 from training. Duration: 22.44 2023-05-11 17:45:41,833 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=714480.0, ans=0.2 2023-05-11 17:45:47,115 INFO [train.py:1021] (0/2) Epoch 40, batch 1100, loss[loss=0.1553, simple_loss=0.2495, pruned_loss=0.0305, over 36915.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.2545, pruned_loss=0.03387, over 7201080.10 frames. ], batch size: 100, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:45:53,394 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=11.67 vs. limit=15.0 2023-05-11 17:46:02,427 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0060-62364-0_sp0.9 from training. Duration: 21.361125 2023-05-11 17:46:08,204 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp1.1 from training. Duration: 27.0318125 2023-05-11 17:46:11,371 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=714580.0, ans=0.0 2023-05-11 17:46:17,023 WARNING [train.py:1182] (0/2) Exclude cut with ID 5622-44585-0006-90525-0_sp0.9 from training. Duration: 28.638875 2023-05-11 17:46:22,772 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.629e+02 3.123e+02 3.377e+02 3.955e+02 6.198e+02, threshold=6.754e+02, percent-clipped=0.0 2023-05-11 17:46:33,037 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0054-76830-0 from training. Duration: 20.4 2023-05-11 17:47:01,674 INFO [train.py:1021] (0/2) Epoch 40, batch 1150, loss[loss=0.1521, simple_loss=0.2354, pruned_loss=0.03439, over 34029.00 frames. ], tot_loss[loss=0.1611, simple_loss=0.2546, pruned_loss=0.0338, over 7191001.96 frames. ], batch size: 75, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:47:06,288 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0071-62375-0 from training. Duration: 20.025 2023-05-11 17:47:06,297 WARNING [train.py:1182] (0/2) Exclude cut with ID 2364-131735-0112-64612-0_sp0.9 from training. Duration: 20.488875 2023-05-11 17:47:12,113 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0 from training. 
Duration: 29.735 2023-05-11 17:47:31,039 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.const_attention_rate, batch_count=714880.0, ans=0.025 2023-05-11 17:47:52,263 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=714930.0, ans=0.125 2023-05-11 17:48:02,058 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer1.prob, batch_count=714980.0, ans=0.125 2023-05-11 17:48:12,418 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=714980.0, ans=0.025 2023-05-11 17:48:15,042 INFO [train.py:1021] (0/2) Epoch 40, batch 1200, loss[loss=0.1758, simple_loss=0.2662, pruned_loss=0.04269, over 36823.00 frames. ], tot_loss[loss=0.1615, simple_loss=0.2548, pruned_loss=0.03411, over 7187471.26 frames. ], batch size: 113, lr: 2.74e-03, grad_scale: 32.0 2023-05-11 17:48:29,519 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer1.prob, batch_count=715080.0, ans=0.125 2023-05-11 17:48:33,514 WARNING [train.py:1182] (0/2) Exclude cut with ID 7276-92427-0014-12983-0_sp0.9 from training. Duration: 21.3055625 2023-05-11 17:48:33,561 WARNING [train.py:1182] (0/2) Exclude cut with ID 1025-75365-0008-79168-0_sp0.9 from training. Duration: 22.0666875 2023-05-11 17:48:39,795 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.0941, 2.7243, 4.1876, 2.9653], device='cuda:0') 2023-05-11 17:48:51,000 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.537e+02 3.055e+02 3.478e+02 4.155e+02 7.184e+02, threshold=6.955e+02, percent-clipped=2.0 2023-05-11 17:49:14,725 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=3.74 vs. limit=15.0 2023-05-11 17:49:29,578 INFO [train.py:1021] (0/2) Epoch 40, batch 1250, loss[loss=0.1565, simple_loss=0.256, pruned_loss=0.0285, over 36915.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2547, pruned_loss=0.03409, over 7201133.94 frames. ], batch size: 105, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:49:40,261 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.75 vs. limit=10.0 2023-05-11 17:49:44,420 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.04 vs. limit=22.5 2023-05-11 17:50:07,556 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=715380.0, ans=0.125 2023-05-11 17:50:14,803 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.attention_skip_rate, batch_count=715430.0, ans=0.0 2023-05-11 17:50:16,555 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.54 vs. limit=6.0 2023-05-11 17:50:20,960 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=10.45 vs. limit=15.0 2023-05-11 17:50:24,620 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0062-62366-0 from training. 
Duration: 20.26 2023-05-11 17:50:38,924 WARNING [train.py:1182] (0/2) Exclude cut with ID 5239-32139-0030-9324-0_sp0.9 from training. Duration: 21.3444375 2023-05-11 17:50:42,059 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=715530.0, ans=0.1 2023-05-11 17:50:43,212 INFO [train.py:1021] (0/2) Epoch 40, batch 1300, loss[loss=0.1448, simple_loss=0.2366, pruned_loss=0.02652, over 37066.00 frames. ], tot_loss[loss=0.1612, simple_loss=0.2545, pruned_loss=0.03392, over 7209684.70 frames. ], batch size: 94, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:51:08,967 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.1999, 4.1168, 3.7956, 4.1096, 3.4832, 3.0365, 3.4882, 3.1432], device='cuda:0') 2023-05-11 17:51:13,067 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=715630.0, ans=0.1 2023-05-11 17:51:20,053 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.583e+02 3.257e+02 3.674e+02 4.336e+02 7.011e+02, threshold=7.347e+02, percent-clipped=1.0 2023-05-11 17:51:35,771 WARNING [train.py:1182] (0/2) Exclude cut with ID 497-129325-0061-62254-0_sp1.1 from training. Duration: 0.97725 2023-05-11 17:51:56,612 INFO [train.py:1021] (0/2) Epoch 40, batch 1350, loss[loss=0.1766, simple_loss=0.2657, pruned_loss=0.04374, over 24597.00 frames. ], tot_loss[loss=0.1613, simple_loss=0.2547, pruned_loss=0.03399, over 7182326.69 frames. ], batch size: 233, lr: 2.74e-03, grad_scale: 16.0 2023-05-11 17:51:58,484 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.6270, 4.9586, 5.0658, 4.7342], device='cuda:0') 2023-05-11 17:52:02,296 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=7.66 vs. limit=22.5 2023-05-11 17:52:13,776 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=715830.0, ans=0.0 2023-05-11 17:52:14,088 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=3.86 vs. limit=15.0 2023-05-11 17:52:17,960 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0031-39906-0_sp0.9 from training. Duration: 22.97225 2023-05-11 17:52:24,100 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=715830.0, ans=0.1 2023-05-11 17:52:27,543 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=7.23 vs. limit=15.0 2023-05-11 17:52:50,604 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0047-39922-0_sp0.9 from training. Duration: 21.97775 2023-05-11 17:53:03,909 WARNING [train.py:1182] (0/2) Exclude cut with ID 1112-1043-0006-89194-0_sp0.9 from training. 
Duration: 21.8333125 2023-05-11 17:53:05,644 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=715980.0, ans=0.0 2023-05-11 17:53:05,707 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.5655, 3.9451, 4.2258, 3.9571], device='cuda:0') 2023-05-11 17:53:11,037 INFO [train.py:1021] (0/2) Epoch 40, batch 1400, loss[loss=0.1546, simple_loss=0.2386, pruned_loss=0.03529, over 36821.00 frames. ], tot_loss[loss=0.161, simple_loss=0.2543, pruned_loss=0.03381, over 7189857.91 frames. ], batch size: 96, lr: 2.73e-03, grad_scale: 16.0 2023-05-11 17:53:15,329 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0031-94921-0 from training. Duration: 20.47 2023-05-11 17:53:30,955 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=3.87 vs. limit=6.0 2023-05-11 17:53:47,858 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.565e+02 3.143e+02 3.588e+02 4.567e+02 7.295e+02, threshold=7.176e+02, percent-clipped=0.0 2023-05-11 17:53:52,236 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=12.18 vs. limit=15.0 2023-05-11 17:54:11,960 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=716230.0, ans=0.125 2023-05-11 17:54:17,782 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.min_positive, batch_count=716230.0, ans=0.05 2023-05-11 17:54:23,217 WARNING [train.py:1182] (0/2) Exclude cut with ID 7395-89880-0037-39912-0_sp0.9 from training. Duration: 20.67225 2023-05-11 17:54:24,648 INFO [train.py:1021] (0/2) Epoch 40, batch 1450, loss[loss=0.1822, simple_loss=0.277, pruned_loss=0.04376, over 36696.00 frames. ], tot_loss[loss=0.161, simple_loss=0.2544, pruned_loss=0.0338, over 7205070.25 frames. ], batch size: 122, lr: 2.73e-03, grad_scale: 16.0 2023-05-11 17:54:28,763 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn1.whiten, num_groups=1, num_channels=192, metric=11.25 vs. limit=22.5 2023-05-11 17:54:35,792 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.scale_min, batch_count=716280.0, ans=0.2 2023-05-11 17:54:43,128 WARNING [train.py:1182] (0/2) Exclude cut with ID 1914-133440-0024-94914-0_sp0.9 from training. Duration: 25.2444375 2023-05-11 17:54:57,847 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward2.hidden_balancer.prob, batch_count=716380.0, ans=0.125 2023-05-11 17:55:02,337 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=716380.0, ans=0.2 2023-05-11 17:55:07,793 WARNING [train.py:1182] (0/2) Exclude cut with ID 3340-169293-0021-76797-0_sp0.9 from training. Duration: 21.1445 2023-05-11 17:55:15,149 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.convnext.layerdrop_rate, batch_count=716430.0, ans=0.015 2023-05-11 17:55:39,335 INFO [train.py:1021] (0/2) Epoch 40, batch 1500, loss[loss=0.1448, simple_loss=0.231, pruned_loss=0.02933, over 35336.00 frames. ], tot_loss[loss=0.1608, simple_loss=0.2542, pruned_loss=0.03374, over 7213770.35 frames. 
], batch size: 78, lr: 2.73e-03, grad_scale: 16.0 2023-05-11 17:55:55,614 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff3_skip_rate, batch_count=716580.0, ans=0.0 2023-05-11 17:55:57,097 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 17:56:11,524 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer_ff2.min_abs, batch_count=716630.0, ans=0.1 2023-05-11 17:56:15,418 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.551e+02 3.050e+02 3.620e+02 4.309e+02 9.058e+02, threshold=7.240e+02, percent-clipped=1.0 2023-05-11 17:56:23,736 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.6266, 5.4642, 4.7703, 5.2227], device='cuda:0') 2023-05-11 17:56:24,953 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0079-62383-0_sp0.9 from training. Duration: 33.038875 2023-05-11 17:56:44,757 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=716730.0, ans=0.2 2023-05-11 17:56:49,012 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.const_attention_rate, batch_count=716730.0, ans=0.025 2023-05-11 17:56:49,421 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.01 vs. limit=15.0 2023-05-11 17:56:52,923 INFO [train.py:1021] (0/2) Epoch 40, batch 1550, loss[loss=0.1652, simple_loss=0.2629, pruned_loss=0.03371, over 36935.00 frames. ], tot_loss[loss=0.1602, simple_loss=0.2536, pruned_loss=0.03344, over 7202517.50 frames. ], batch size: 105, lr: 2.73e-03, grad_scale: 16.0 2023-05-11 17:56:53,301 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.min_positive, batch_count=716780.0, ans=0.025 2023-05-11 17:57:01,832 WARNING [train.py:1182] (0/2) Exclude cut with ID 6426-64291-0000-16059-0_sp0.9 from training. Duration: 20.0944375 2023-05-11 17:57:03,436 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.conv.2.prob, batch_count=716780.0, ans=0.125 2023-05-11 17:57:16,874 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp1.1 from training. Duration: 20.4 2023-05-11 17:57:24,880 WARNING [train.py:1182] (0/2) Exclude cut with ID 6330-62851-0022-91297-0 from training. Duration: 20.085 2023-05-11 17:57:35,214 WARNING [train.py:1182] (0/2) Exclude cut with ID 4860-13185-0032-76709-0_sp0.9 from training. Duration: 23.07775 2023-05-11 17:57:47,116 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.0981, 3.4353, 4.6670, 3.3295], device='cuda:0') 2023-05-11 17:57:51,474 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.9124, 3.5154, 3.3234, 4.1753, 2.7491, 3.6710, 4.2040, 3.6708], device='cuda:0') 2023-05-11 17:57:55,826 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.skip_rate, batch_count=716980.0, ans=0.09899494936611666 2023-05-11 17:58:07,845 INFO [train.py:1021] (0/2) Epoch 40, batch 1600, loss[loss=0.1382, simple_loss=0.2249, pruned_loss=0.02579, over 35279.00 frames. ], tot_loss[loss=0.1605, simple_loss=0.2541, pruned_loss=0.03347, over 7194768.28 frames. 
], batch size: 78, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 17:58:09,741 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.skip_rate, batch_count=717030.0, ans=0.04949747468305833 2023-05-11 17:58:19,030 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.7532, 4.0390, 2.4196, 2.7911], device='cuda:0') 2023-05-11 17:58:22,968 WARNING [train.py:1182] (0/2) Exclude cut with ID 2929-85685-0044-62348-0_sp0.9 from training. Duration: 24.9333125 2023-05-11 17:58:44,715 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.567e+02 2.935e+02 3.211e+02 3.749e+02 6.344e+02, threshold=6.423e+02, percent-clipped=0.0 2023-05-11 17:58:48,096 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.9720, 4.2524, 2.8485, 3.1506], device='cuda:0') 2023-05-11 17:58:50,994 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.prob, batch_count=717180.0, ans=0.125 2023-05-11 17:58:51,549 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.25 vs. limit=6.0 2023-05-11 17:59:08,585 WARNING [train.py:1182] (0/2) Exclude cut with ID 5118-111612-0016-124680-0_sp0.9 from training. Duration: 20.388875 2023-05-11 17:59:15,124 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp1.1 from training. Duration: 20.3590625 2023-05-11 17:59:22,260 INFO [train.py:1021] (0/2) Epoch 40, batch 1650, loss[loss=0.1577, simple_loss=0.2586, pruned_loss=0.02837, over 36938.00 frames. ], tot_loss[loss=0.1601, simple_loss=0.2537, pruned_loss=0.03324, over 7216155.04 frames. ], batch size: 108, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 17:59:22,602 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.prob, batch_count=717280.0, ans=0.125 2023-05-11 17:59:39,862 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.attention_skip_rate, batch_count=717330.0, ans=0.0 2023-05-11 17:59:41,340 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=717330.0, ans=0.1 2023-05-11 17:59:42,025 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=4.19 vs. limit=15.0 2023-05-11 17:59:42,800 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=717330.0, ans=0.1 2023-05-11 18:00:08,312 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer1.prob, batch_count=717430.0, ans=0.125 2023-05-11 18:00:15,275 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=10.53 vs. limit=15.0 2023-05-11 18:00:17,514 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer1.prob, batch_count=717430.0, ans=0.125 2023-05-11 18:00:18,083 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=9.46 vs. limit=15.0 2023-05-11 18:00:25,970 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0_sp1.1 from training. 
Duration: 0.836375 2023-05-11 18:00:35,993 INFO [train.py:1021] (0/2) Epoch 40, batch 1700, loss[loss=0.172, simple_loss=0.2616, pruned_loss=0.04119, over 35780.00 frames. ], tot_loss[loss=0.16, simple_loss=0.2533, pruned_loss=0.0333, over 7241252.92 frames. ], batch size: 133, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:01:12,056 WARNING [train.py:1182] (0/2) Exclude cut with ID 8565-290391-0049-67394-0_sp0.9 from training. Duration: 21.3166875 2023-05-11 18:01:13,319 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.610e+02 3.184e+02 3.685e+02 4.137e+02 8.879e+02, threshold=7.370e+02, percent-clipped=4.0 2023-05-11 18:01:16,646 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.0210, 3.8310, 3.5663, 3.8276, 3.2351, 2.9629, 3.3344, 2.8922], device='cuda:0') 2023-05-11 18:01:44,091 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0029-104863-0_sp0.9 from training. Duration: 22.1055625 2023-05-11 18:01:44,677 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=4.47 vs. limit=15.0 2023-05-11 18:01:49,886 INFO [train.py:1021] (0/2) Epoch 40, batch 1750, loss[loss=0.1527, simple_loss=0.2338, pruned_loss=0.03579, over 37074.00 frames. ], tot_loss[loss=0.1609, simple_loss=0.2535, pruned_loss=0.03421, over 7263326.23 frames. ], batch size: 88, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:01:54,286 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp1.1 from training. Duration: 21.77725 2023-05-11 18:02:05,317 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.attention_skip_rate, batch_count=717830.0, ans=0.0 2023-05-11 18:02:15,172 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp0.9 from training. Duration: 27.8166875 2023-05-11 18:02:21,136 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.0.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 18:02:40,289 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp1.1 from training. Duration: 22.5090625 2023-05-11 18:02:47,590 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0 from training. Duration: 25.035 2023-05-11 18:03:04,358 INFO [train.py:1021] (0/2) Epoch 40, batch 1800, loss[loss=0.1796, simple_loss=0.2745, pruned_loss=0.04235, over 37118.00 frames. ], tot_loss[loss=0.1626, simple_loss=0.2546, pruned_loss=0.03526, over 7238979.19 frames. ], batch size: 107, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:03:05,867 WARNING [train.py:1182] (0/2) Exclude cut with ID 774-127930-0014-10412-0_sp1.1 from training. Duration: 0.95 2023-05-11 18:03:07,950 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=11.56 vs. limit=15.0 2023-05-11 18:03:22,473 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp0.9 from training. Duration: 0.92225 2023-05-11 18:03:40,919 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.755e+02 3.401e+02 3.782e+02 4.155e+02 7.613e+02, threshold=7.565e+02, percent-clipped=1.0 2023-05-11 18:03:43,385 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=718130.0, ans=0.2 2023-05-11 18:03:47,458 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0 from training. 
Duration: 21.97 2023-05-11 18:03:52,095 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=718180.0, ans=0.0 2023-05-11 18:04:02,292 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=718230.0, ans=0.125 2023-05-11 18:04:03,709 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=718230.0, ans=0.0 2023-05-11 18:04:08,264 WARNING [train.py:1182] (0/2) Exclude cut with ID 7492-105653-0055-62765-0_sp0.9 from training. Duration: 21.97225 2023-05-11 18:04:09,665 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp0.9 from training. Duration: 25.3333125 2023-05-11 18:04:18,384 INFO [train.py:1021] (0/2) Epoch 40, batch 1850, loss[loss=0.1898, simple_loss=0.2768, pruned_loss=0.05137, over 36743.00 frames. ], tot_loss[loss=0.1637, simple_loss=0.2549, pruned_loss=0.0363, over 7206517.01 frames. ], batch size: 118, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:04:21,232 WARNING [train.py:1182] (0/2) Exclude cut with ID 5172-29468-0015-19128-0_sp0.9 from training. Duration: 21.5055625 2023-05-11 18:04:28,977 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=718280.0, ans=0.1 2023-05-11 18:04:30,120 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0_sp1.1 from training. Duration: 20.72725 2023-05-11 18:05:02,963 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=718430.0, ans=0.1 2023-05-11 18:05:04,218 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp0.9 from training. Duration: 26.32775 2023-05-11 18:05:32,114 INFO [train.py:1021] (0/2) Epoch 40, batch 1900, loss[loss=0.2047, simple_loss=0.2861, pruned_loss=0.0617, over 24563.00 frames. ], tot_loss[loss=0.1647, simple_loss=0.255, pruned_loss=0.03725, over 7174658.72 frames. ], batch size: 234, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:05:35,199 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0 from training. Duration: 20.025 2023-05-11 18:05:40,896 WARNING [train.py:1182] (0/2) Exclude cut with ID 6709-74022-0004-86860-0_sp1.1 from training. Duration: 0.9409375 2023-05-11 18:05:40,903 WARNING [train.py:1182] (0/2) Exclude cut with ID 4757-1811-0023-62229-0_sp0.9 from training. Duration: 21.37775 2023-05-11 18:05:54,779 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff2_skip_rate, batch_count=718580.0, ans=0.0 2023-05-11 18:05:56,333 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=718580.0, ans=0.0 2023-05-11 18:05:58,920 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0004-25974-0_sp0.9 from training. Duration: 21.17225 2023-05-11 18:05:58,931 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0_sp0.9 from training. 
Duration: 27.511125 2023-05-11 18:06:08,913 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.998e+02 3.434e+02 3.777e+02 4.398e+02 7.818e+02, threshold=7.555e+02, percent-clipped=1.0 2023-05-11 18:06:09,348 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2023-05-11 18:06:32,788 WARNING [train.py:1182] (0/2) Exclude cut with ID 453-131332-0000-47844-0 from training. Duration: 22.8 2023-05-11 18:06:37,133 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0 from training. Duration: 22.585 2023-05-11 18:06:46,177 INFO [train.py:1021] (0/2) Epoch 40, batch 1950, loss[loss=0.1761, simple_loss=0.2701, pruned_loss=0.04109, over 37178.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2553, pruned_loss=0.03786, over 7153085.74 frames. ], batch size: 102, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:06:56,614 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.7355, 2.8959, 4.3635, 2.8472], device='cuda:0') 2023-05-11 18:07:06,919 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0001-146967-0_sp0.9 from training. Duration: 22.0166875 2023-05-11 18:07:15,670 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=718880.0, ans=0.2 2023-05-11 18:07:22,833 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp1.1 from training. Duration: 24.395375 2023-05-11 18:07:28,731 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp0.9 from training. Duration: 27.47775 2023-05-11 18:07:30,880 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff3_skip_rate, batch_count=718930.0, ans=0.0 2023-05-11 18:07:33,593 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0_sp0.9 from training. Duration: 24.8833125 2023-05-11 18:07:37,858 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0 from training. Duration: 23.39 2023-05-11 18:07:43,615 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp0.9 from training. Duration: 28.72225 2023-05-11 18:07:52,466 WARNING [train.py:1182] (0/2) Exclude cut with ID 585-294811-0110-133686-0_sp0.9 from training. Duration: 20.8944375 2023-05-11 18:08:00,191 INFO [train.py:1021] (0/2) Epoch 40, batch 2000, loss[loss=0.1575, simple_loss=0.2509, pruned_loss=0.03212, over 37190.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2561, pruned_loss=0.03854, over 7129652.85 frames. ], batch size: 102, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:08:07,307 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0_sp0.9 from training. Duration: 23.8444375 2023-05-11 18:08:29,648 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0 from training. Duration: 25.85 2023-05-11 18:08:29,655 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0 from training. Duration: 21.39 2023-05-11 18:08:32,843 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=719130.0, ans=0.0 2023-05-11 18:08:37,517 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=1.66 vs. 
limit=6.0 2023-05-11 18:08:38,269 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.066e+02 3.556e+02 4.253e+02 4.757e+02 7.009e+02, threshold=8.506e+02, percent-clipped=0.0 2023-05-11 18:08:39,709 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0 from training. Duration: 27.92 2023-05-11 18:08:45,916 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff2_skip_rate, batch_count=719180.0, ans=0.0 2023-05-11 18:08:47,484 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.9302, 3.8853, 4.4720, 4.6153], device='cuda:0') 2023-05-11 18:08:52,374 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=719180.0, ans=0.1 2023-05-11 18:09:02,267 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=719230.0, ans=0.2 2023-05-11 18:09:03,090 INFO [scaling.py:969] (0/2) Whitening: name=encoder_embed.out_whiten, num_groups=1, num_channels=192, metric=7.06 vs. limit=8.0 2023-05-11 18:09:06,479 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0039-130165-0_sp0.9 from training. Duration: 20.661125 2023-05-11 18:09:14,011 INFO [train.py:1021] (0/2) Epoch 40, batch 2050, loss[loss=0.1704, simple_loss=0.2583, pruned_loss=0.04124, over 37166.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2562, pruned_loss=0.03915, over 7110230.44 frames. ], batch size: 102, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:09:17,936 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.conv_module2.whiten, num_groups=1, num_channels=192, metric=5.47 vs. limit=15.0 2023-05-11 18:09:26,740 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=13.47 vs. limit=15.0 2023-05-11 18:09:30,215 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0043-15874-0_sp0.9 from training. Duration: 20.07225 2023-05-11 18:09:38,693 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0 from training. Duration: 21.01 2023-05-11 18:09:45,338 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer1.prob, batch_count=719380.0, ans=0.125 2023-05-11 18:10:02,467 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=10.65 vs. limit=22.5 2023-05-11 18:10:27,418 INFO [train.py:1021] (0/2) Epoch 40, batch 2100, loss[loss=0.1672, simple_loss=0.2559, pruned_loss=0.03924, over 37081.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2568, pruned_loss=0.03964, over 7114175.63 frames. ], batch size: 103, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:10:29,665 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=2.83 vs. limit=6.0 2023-05-11 18:10:47,009 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0 from training. Duration: 20.65 2023-05-11 18:10:54,778 WARNING [train.py:1182] (0/2) Exclude cut with ID 5796-66357-0007-116447-0 from training. 
Duration: 21.46 2023-05-11 18:10:57,938 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.4003, 5.1851, 4.6142, 4.9265], device='cuda:0') 2023-05-11 18:11:06,684 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.6063, 4.8814, 5.0896, 4.7541], device='cuda:0') 2023-05-11 18:11:07,760 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.918e+02 3.593e+02 3.997e+02 4.861e+02 8.644e+02, threshold=7.994e+02, percent-clipped=1.0 2023-05-11 18:11:10,972 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=719680.0, ans=0.125 2023-05-11 18:11:26,549 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=13.80 vs. limit=22.5 2023-05-11 18:11:37,597 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=719730.0, ans=0.1 2023-05-11 18:11:37,942 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.70 vs. limit=15.0 2023-05-11 18:11:40,562 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer2.min_positive, batch_count=719780.0, ans=0.05 2023-05-11 18:11:41,638 INFO [train.py:1021] (0/2) Epoch 40, batch 2150, loss[loss=0.1511, simple_loss=0.2366, pruned_loss=0.03282, over 37168.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2559, pruned_loss=0.0394, over 7123385.80 frames. ], batch size: 93, lr: 2.73e-03, grad_scale: 16.0 2023-05-11 18:11:41,714 WARNING [train.py:1182] (0/2) Exclude cut with ID 3557-8342-0013-54691-0 from training. Duration: 0.92 2023-05-11 18:11:49,468 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0023-13010-0_sp0.9 from training. Duration: 23.7666875 2023-05-11 18:11:49,804 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.prob, batch_count=719780.0, ans=0.125 2023-05-11 18:12:01,411 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=719830.0, ans=0.07 2023-05-11 18:12:13,198 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=719880.0, ans=0.2 2023-05-11 18:12:26,279 WARNING [train.py:1182] (0/2) Exclude cut with ID 8544-281189-0060-101339-0_sp0.9 from training. Duration: 20.861125 2023-05-11 18:12:32,436 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=719930.0, ans=0.0 2023-05-11 18:12:35,648 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0_sp0.9 from training. Duration: 22.711125 2023-05-11 18:12:41,681 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.0549, 4.1650, 4.7128, 4.9391], device='cuda:0') 2023-05-11 18:12:44,745 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=4.59 vs. 
limit=12.0 2023-05-11 18:12:45,732 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/checkpoint-144000.pt 2023-05-11 18:12:47,012 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=719980.0, ans=0.1 2023-05-11 18:12:48,482 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=719980.0, ans=0.125 2023-05-11 18:12:56,749 INFO [train.py:1021] (0/2) Epoch 40, batch 2200, loss[loss=0.1716, simple_loss=0.2617, pruned_loss=0.04071, over 37027.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2558, pruned_loss=0.03943, over 7133702.44 frames. ], batch size: 116, lr: 2.73e-03, grad_scale: 16.0 2023-05-11 18:13:04,073 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=720030.0, ans=0.125 2023-05-11 18:13:13,200 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.const_attention_rate, batch_count=720080.0, ans=0.025 2023-05-11 18:13:18,610 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp1.1 from training. Duration: 22.986375 2023-05-11 18:13:31,076 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=720130.0, ans=0.1 2023-05-11 18:13:35,186 WARNING [train.py:1182] (0/2) Exclude cut with ID 8040-260924-0003-80960-0_sp0.9 from training. Duration: 22.07225 2023-05-11 18:13:36,527 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.107e+02 3.524e+02 3.928e+02 4.594e+02 5.812e+02, threshold=7.856e+02, percent-clipped=0.0 2023-05-11 18:13:38,136 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0045-26330-0_sp0.9 from training. Duration: 20.3055625 2023-05-11 18:13:41,017 WARNING [train.py:1182] (0/2) Exclude cut with ID 6356-271890-0060-94317-0_sp0.9 from training. Duration: 20.72225 2023-05-11 18:13:51,211 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.0688, 4.1741, 3.8305, 4.1724, 3.4579, 3.2933, 3.6326, 3.1901], device='cuda:0') 2023-05-11 18:13:58,736 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=720230.0, ans=0.2 2023-05-11 18:14:01,968 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0_sp1.1 from training. Duration: 22.4818125 2023-05-11 18:14:05,133 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=720230.0, ans=0.1 2023-05-11 18:14:10,525 INFO [train.py:1021] (0/2) Epoch 40, batch 2250, loss[loss=0.171, simple_loss=0.2633, pruned_loss=0.03938, over 37013.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2553, pruned_loss=0.03959, over 7125584.09 frames. ], batch size: 104, lr: 2.73e-03, grad_scale: 16.0 2023-05-11 18:14:17,047 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten1.whitening_limit, batch_count=720280.0, ans=10.0 2023-05-11 18:14:20,081 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer_na.min_abs, batch_count=720280.0, ans=0.02 2023-05-11 18:14:29,831 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp0.9 from training. 
Duration: 25.0944375 2023-05-11 18:14:34,110 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0 from training. Duration: 21.515 2023-05-11 18:14:39,828 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp0.9 from training. Duration: 27.02225 2023-05-11 18:14:40,480 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=2.98 vs. limit=12.0 2023-05-11 18:14:44,186 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0010-62480-0_sp0.9 from training. Duration: 22.22225 2023-05-11 18:14:51,527 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0085-44554-0_sp0.9 from training. Duration: 20.85 2023-05-11 18:14:55,459 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=3.64 vs. limit=15.0 2023-05-11 18:15:14,312 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=720480.0, ans=0.125 2023-05-11 18:15:24,199 INFO [train.py:1021] (0/2) Epoch 40, batch 2300, loss[loss=0.1695, simple_loss=0.2642, pruned_loss=0.03735, over 37074.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2551, pruned_loss=0.03947, over 7141316.91 frames. ], batch size: 103, lr: 2.73e-03, grad_scale: 16.0 2023-05-11 18:15:25,741 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0 from training. Duration: 21.54 2023-05-11 18:15:28,907 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=720530.0, ans=0.125 2023-05-11 18:15:30,159 WARNING [train.py:1182] (0/2) Exclude cut with ID 4964-30587-0040-44509-0_sp1.1 from training. Duration: 20.5318125 2023-05-11 18:15:31,850 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=720530.0, ans=0.0 2023-05-11 18:15:40,565 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0012-134311-0_sp0.9 from training. Duration: 21.9333125 2023-05-11 18:15:55,852 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.prob, batch_count=720630.0, ans=0.125 2023-05-11 18:16:05,072 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.140e+02 3.533e+02 3.981e+02 4.673e+02 6.890e+02, threshold=7.961e+02, percent-clipped=0.0 2023-05-11 18:16:29,811 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0025-130151-0_sp0.9 from training. Duration: 21.7944375 2023-05-11 18:16:34,495 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.prob, batch_count=720730.0, ans=0.125 2023-05-11 18:16:37,973 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2023-05-11 18:16:38,970 INFO [train.py:1021] (0/2) Epoch 40, batch 2350, loss[loss=0.1628, simple_loss=0.2537, pruned_loss=0.03592, over 37157.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2553, pruned_loss=0.0396, over 7140703.44 frames. ], batch size: 102, lr: 2.73e-03, grad_scale: 16.0 2023-05-11 18:16:42,062 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0_sp0.9 from training. Duration: 22.4666875 2023-05-11 18:16:50,746 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0 from training. 
Duration: 21.635 2023-05-11 18:16:55,701 WARNING [train.py:1182] (0/2) Exclude cut with ID 6121-9014-0076-24124-0_sp0.9 from training. Duration: 24.038875 2023-05-11 18:17:14,718 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.prob, batch_count=720880.0, ans=0.125 2023-05-11 18:17:17,454 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.prob, batch_count=720880.0, ans=0.125 2023-05-11 18:17:20,441 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=720880.0, ans=0.0 2023-05-11 18:17:25,611 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn1.whiten, num_groups=1, num_channels=192, metric=10.61 vs. limit=22.5 2023-05-11 18:17:28,951 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.attention_skip_rate, batch_count=720930.0, ans=0.0 2023-05-11 18:17:41,257 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp1.1 from training. Duration: 21.786375 2023-05-11 18:17:46,618 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=720980.0, ans=0.125 2023-05-11 18:17:53,843 INFO [train.py:1021] (0/2) Epoch 40, batch 2400, loss[loss=0.1802, simple_loss=0.27, pruned_loss=0.04516, over 36814.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2548, pruned_loss=0.03971, over 7153204.38 frames. ], batch size: 113, lr: 2.73e-03, grad_scale: 32.0 2023-05-11 18:17:55,364 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0002-12989-0 from training. Duration: 20.22 2023-05-11 18:18:11,482 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=721080.0, ans=0.1 2023-05-11 18:18:23,192 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer2.min_positive, batch_count=721130.0, ans=0.05 2023-05-11 18:18:25,818 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=3.92 vs. limit=6.0 2023-05-11 18:18:32,328 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.attention_skip_rate, batch_count=721130.0, ans=0.0 2023-05-11 18:18:33,543 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.013e+02 3.469e+02 3.709e+02 4.363e+02 6.998e+02, threshold=7.417e+02, percent-clipped=0.0 2023-05-11 18:18:37,310 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=721180.0, ans=0.0 2023-05-11 18:18:53,893 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=4.02 vs. limit=15.0 2023-05-11 18:19:02,252 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.1611, 3.8564, 4.4745, 4.6236], device='cuda:0') 2023-05-11 18:19:07,679 INFO [train.py:1021] (0/2) Epoch 40, batch 2450, loss[loss=0.156, simple_loss=0.2383, pruned_loss=0.03685, over 37013.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2547, pruned_loss=0.03972, over 7116618.37 frames. 
], batch size: 91, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:19:09,494 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=721280.0, ans=0.1 2023-05-11 18:19:15,300 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.5518, 5.3438, 4.7506, 5.1514], device='cuda:0') 2023-05-11 18:19:24,477 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.balancer1.prob, batch_count=721330.0, ans=0.125 2023-05-11 18:19:31,057 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.8894, 4.2500, 2.7541, 2.8332], device='cuda:0') 2023-05-11 18:19:44,424 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=8.11 vs. limit=22.5 2023-05-11 18:19:51,865 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.1.whiten, num_groups=1, num_channels=192, metric=3.81 vs. limit=12.0 2023-05-11 18:19:52,779 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=721430.0, ans=0.2 2023-05-11 18:19:55,386 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0 from training. Duration: 25.285 2023-05-11 18:20:05,860 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.8449, 5.0907, 5.1503, 5.7256], device='cuda:0') 2023-05-11 18:20:22,654 INFO [train.py:1021] (0/2) Epoch 40, batch 2500, loss[loss=0.1813, simple_loss=0.2698, pruned_loss=0.04639, over 35798.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2543, pruned_loss=0.03961, over 7134178.02 frames. ], batch size: 133, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:20:40,664 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3331, 4.4616, 2.4497, 2.4728], device='cuda:0') 2023-05-11 18:20:57,782 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=721630.0, ans=0.1 2023-05-11 18:21:00,451 WARNING [train.py:1182] (0/2) Exclude cut with ID 811-130148-0001-63453-0_sp0.9 from training. Duration: 20.861125 2023-05-11 18:21:01,802 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.762e+02 3.567e+02 3.955e+02 4.507e+02 6.071e+02, threshold=7.909e+02, percent-clipped=0.0 2023-05-11 18:21:09,065 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=721680.0, ans=0.2 2023-05-11 18:21:16,298 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=721680.0, ans=0.0 2023-05-11 18:21:16,768 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=6.44 vs. limit=15.0 2023-05-11 18:21:23,214 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0 from training. Duration: 20.88 2023-05-11 18:21:36,438 INFO [train.py:1021] (0/2) Epoch 40, batch 2550, loss[loss=0.1667, simple_loss=0.2559, pruned_loss=0.03873, over 37093.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2542, pruned_loss=0.03955, over 7118014.66 frames. 
], batch size: 103, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:21:55,925 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0_sp0.9 from training. Duration: 23.4166875 2023-05-11 18:22:15,524 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass_mid.scale_min, batch_count=721880.0, ans=0.2 2023-05-11 18:22:20,033 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=721930.0, ans=0.2 2023-05-11 18:22:37,276 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=721980.0, ans=0.125 2023-05-11 18:22:51,193 INFO [train.py:1021] (0/2) Epoch 40, batch 2600, loss[loss=0.1526, simple_loss=0.2366, pruned_loss=0.03433, over 37067.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2544, pruned_loss=0.03942, over 7141756.99 frames. ], batch size: 94, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:23:13,080 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0 from training. Duration: 21.24 2023-05-11 18:23:14,434 WARNING [train.py:1182] (0/2) Exclude cut with ID 6533-399-0047-104881-0_sp0.9 from training. Duration: 23.9055625 2023-05-11 18:23:27,016 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward3.out_whiten.whitening_limit, batch_count=722130.0, ans=15.0 2023-05-11 18:23:30,374 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.870e+02 3.474e+02 3.936e+02 4.539e+02 7.691e+02, threshold=7.872e+02, percent-clipped=0.0 2023-05-11 18:23:33,728 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer2.prob, batch_count=722180.0, ans=0.125 2023-05-11 18:23:43,363 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=722180.0, ans=0.0 2023-05-11 18:23:47,244 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp0.9 from training. Duration: 25.988875 2023-05-11 18:23:54,536 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0001-134300-0_sp0.9 from training. Duration: 20.67225 2023-05-11 18:24:02,147 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=722230.0, ans=0.1 2023-05-11 18:24:02,154 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=722230.0, ans=0.125 2023-05-11 18:24:04,735 INFO [train.py:1021] (0/2) Epoch 40, batch 2650, loss[loss=0.1562, simple_loss=0.2422, pruned_loss=0.03506, over 36852.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2541, pruned_loss=0.03957, over 7122490.99 frames. ], batch size: 96, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:24:44,878 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0 from training. Duration: 20.34 2023-05-11 18:24:50,687 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.skip_rate, batch_count=722430.0, ans=0.035 2023-05-11 18:24:56,760 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([6.1317, 5.3341, 5.4653, 6.0136], device='cuda:0') 2023-05-11 18:25:17,957 INFO [train.py:1021] (0/2) Epoch 40, batch 2700, loss[loss=0.1472, simple_loss=0.2306, pruned_loss=0.03191, over 36779.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.254, pruned_loss=0.03959, over 7133621.72 frames. 
], batch size: 89, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:25:25,411 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=722530.0, ans=0.0 2023-05-11 18:25:31,222 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=722530.0, ans=0.125 2023-05-11 18:25:35,364 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=722580.0, ans=0.125 2023-05-11 18:25:42,771 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.2850, 3.9117, 3.6387, 3.9004, 3.2906, 2.9559, 3.4110, 2.9117], device='cuda:0') 2023-05-11 18:25:44,350 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=3.43 vs. limit=15.0 2023-05-11 18:25:49,206 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn1.whiten, num_groups=1, num_channels=192, metric=10.74 vs. limit=22.5 2023-05-11 18:25:58,082 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.051e+02 3.609e+02 3.943e+02 4.589e+02 6.554e+02, threshold=7.887e+02, percent-clipped=0.0 2023-05-11 18:25:58,189 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp0.9 from training. Duration: 25.061125 2023-05-11 18:26:09,572 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0 from training. Duration: 0.83 2023-05-11 18:26:09,858 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=722680.0, ans=0.125 2023-05-11 18:26:22,308 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass_mid.scale_min, batch_count=722730.0, ans=0.2 2023-05-11 18:26:28,373 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=722730.0, ans=0.125 2023-05-11 18:26:28,386 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=722730.0, ans=0.0 2023-05-11 18:26:32,347 INFO [train.py:1021] (0/2) Epoch 40, batch 2750, loss[loss=0.1668, simple_loss=0.2547, pruned_loss=0.03948, over 32435.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2543, pruned_loss=0.03954, over 7145438.34 frames. ], batch size: 170, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:26:38,277 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0017-41203-0 from training. Duration: 24.73 2023-05-11 18:26:50,041 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0 from training. Duration: 23.965 2023-05-11 18:26:50,341 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=722830.0, ans=0.0 2023-05-11 18:26:58,583 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0030-146996-0_sp0.9 from training. Duration: 22.088875 2023-05-11 18:27:17,177 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0006-134305-0_sp0.9 from training. 
Duration: 23.6 2023-05-11 18:27:18,874 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.attention_skip_rate, batch_count=722930.0, ans=0.0 2023-05-11 18:27:27,241 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=722930.0, ans=0.2 2023-05-11 18:27:46,170 INFO [train.py:1021] (0/2) Epoch 40, batch 2800, loss[loss=0.1487, simple_loss=0.2286, pruned_loss=0.03442, over 35425.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2543, pruned_loss=0.03939, over 7158642.80 frames. ], batch size: 78, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:28:16,535 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=723130.0, ans=0.2 2023-05-11 18:28:21,590 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=18.29 vs. limit=22.5 2023-05-11 18:28:25,476 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.1702, 5.5226, 5.3265, 5.9316], device='cuda:0') 2023-05-11 18:28:26,581 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.822e+02 3.439e+02 3.804e+02 4.180e+02 6.455e+02, threshold=7.607e+02, percent-clipped=0.0 2023-05-11 18:28:54,307 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.6702, 5.4650, 4.8747, 5.2905], device='cuda:0') 2023-05-11 18:28:59,757 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0 from training. Duration: 23.795 2023-05-11 18:29:01,142 INFO [train.py:1021] (0/2) Epoch 40, batch 2850, loss[loss=0.1387, simple_loss=0.2209, pruned_loss=0.02831, over 36817.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2535, pruned_loss=0.03917, over 7159384.37 frames. ], batch size: 84, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:29:02,925 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.9524, 4.2777, 4.4669, 4.2451], device='cuda:0') 2023-05-11 18:29:02,975 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.prob, batch_count=723280.0, ans=0.125 2023-05-11 18:29:04,530 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.0025, 4.3541, 3.2693, 3.1378], device='cuda:0') 2023-05-11 18:29:15,701 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0_sp1.1 from training. Duration: 21.5409375 2023-05-11 18:29:17,215 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp0.9 from training. Duration: 24.97775 2023-05-11 18:29:28,553 WARNING [train.py:1182] (0/2) Exclude cut with ID 1085-156170-0017-128270-0_sp0.9 from training. Duration: 23.3444375 2023-05-11 18:29:37,379 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.5910, 4.8962, 5.0597, 4.7524], device='cuda:0') 2023-05-11 18:29:39,784 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=3.18 vs. 
limit=10.0 2023-05-11 18:29:54,605 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass_mid.scale_min, batch_count=723430.0, ans=0.2 2023-05-11 18:29:55,062 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=11.02 vs. limit=22.5 2023-05-11 18:29:58,562 WARNING [train.py:1182] (0/2) Exclude cut with ID 6010-56788-0055-90261-0_sp0.9 from training. Duration: 23.2 2023-05-11 18:30:05,799 WARNING [train.py:1182] (0/2) Exclude cut with ID 5653-46179-0060-117930-0_sp0.9 from training. Duration: 21.17225 2023-05-11 18:30:14,315 INFO [train.py:1021] (0/2) Epoch 40, batch 2900, loss[loss=0.1679, simple_loss=0.2647, pruned_loss=0.03555, over 37007.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2539, pruned_loss=0.03929, over 7139346.54 frames. ], batch size: 104, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:30:17,582 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.5997, 3.7350, 4.0374, 3.7026], device='cuda:0') 2023-05-11 18:30:20,284 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=723530.0, ans=0.125 2023-05-11 18:30:25,894 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp0.9 from training. Duration: 24.6555625 2023-05-11 18:30:54,430 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.830e+02 3.397e+02 3.661e+02 4.191e+02 5.595e+02, threshold=7.321e+02, percent-clipped=0.0 2023-05-11 18:31:24,200 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-65654-0031-41259-0 from training. Duration: 20.44 2023-05-11 18:31:28,444 INFO [train.py:1021] (0/2) Epoch 40, batch 2950, loss[loss=0.1893, simple_loss=0.2794, pruned_loss=0.04963, over 36421.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2533, pruned_loss=0.03895, over 7165717.63 frames. ], batch size: 126, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:31:28,760 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=723780.0, ans=0.125 2023-05-11 18:31:35,452 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=7.48 vs. limit=15.0 2023-05-11 18:31:37,577 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0_sp0.9 from training. Duration: 23.45 2023-05-11 18:31:53,739 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.const_attention_rate, batch_count=723830.0, ans=0.025 2023-05-11 18:31:57,980 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.const_attention_rate, batch_count=723880.0, ans=0.025 2023-05-11 18:31:59,483 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer1.prob, batch_count=723880.0, ans=0.125 2023-05-11 18:32:00,988 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.0325, 4.2615, 4.5317, 4.5960], device='cuda:0') 2023-05-11 18:32:05,056 WARNING [train.py:1182] (0/2) Exclude cut with ID 6945-60535-0076-12784-0_sp0.9 from training. 
Duration: 20.52225 2023-05-11 18:32:06,791 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.2.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.3598, 3.7634, 2.5023, 2.8507], device='cuda:0') 2023-05-11 18:32:11,520 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.prob, batch_count=723930.0, ans=0.125 2023-05-11 18:32:14,150 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0 from training. Duration: 22.19 2023-05-11 18:32:20,248 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=723930.0, ans=0.0 2023-05-11 18:32:24,981 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp1.1 from training. Duration: 25.3818125 2023-05-11 18:32:26,702 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.scale_min, batch_count=723980.0, ans=0.2 2023-05-11 18:32:36,968 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([1.9268, 2.4489, 3.4480, 2.6102], device='cuda:0') 2023-05-11 18:32:38,298 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=723980.0, ans=0.1 2023-05-11 18:32:42,376 INFO [train.py:1021] (0/2) Epoch 40, batch 3000, loss[loss=0.2003, simple_loss=0.2843, pruned_loss=0.05815, over 24198.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2536, pruned_loss=0.0389, over 7134011.99 frames. ], batch size: 233, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:32:42,376 INFO [train.py:1048] (0/2) Computing validation loss 2023-05-11 18:32:55,479 INFO [train.py:1057] (0/2) Epoch 40, validation: loss=0.1515, simple_loss=0.2513, pruned_loss=0.02583, over 944034.00 frames. 2023-05-11 18:32:55,479 INFO [train.py:1058] (0/2) Maximum memory allocated so far is 18788MB 2023-05-11 18:32:55,543 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0043-132310-0_sp0.9 from training. Duration: 28.0944375 2023-05-11 18:33:01,865 WARNING [train.py:1182] (0/2) Exclude cut with ID 2195-150901-0045-59933-0_sp0.9 from training. Duration: 22.9444375 2023-05-11 18:33:09,711 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp1.1 from training. Duration: 21.6318125 2023-05-11 18:33:19,016 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.skip_rate, batch_count=724080.0, ans=0.04949747468305833 2023-05-11 18:33:27,433 WARNING [train.py:1182] (0/2) Exclude cut with ID 8631-249866-0030-130156-0 from training. Duration: 23.695 2023-05-11 18:33:35,929 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.823e+02 3.423e+02 3.839e+02 4.352e+02 5.918e+02, threshold=7.679e+02, percent-clipped=0.0 2023-05-11 18:33:55,379 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0 from training. Duration: 23.955 2023-05-11 18:34:04,793 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass_mid.scale_min, batch_count=724230.0, ans=0.2 2023-05-11 18:34:10,327 INFO [train.py:1021] (0/2) Epoch 40, batch 3050, loss[loss=0.1422, simple_loss=0.225, pruned_loss=0.02972, over 36807.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2535, pruned_loss=0.03898, over 7138109.12 frames. 
], batch size: 89, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:34:13,409 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff3_skip_rate, batch_count=724280.0, ans=0.0 2023-05-11 18:34:16,414 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer1.prob, batch_count=724280.0, ans=0.125 2023-05-11 18:34:30,722 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0024-13011-0_sp0.9 from training. Duration: 26.438875 2023-05-11 18:34:38,203 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=724380.0, ans=0.1 2023-05-11 18:35:00,852 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.balancer2.prob, batch_count=724430.0, ans=0.125 2023-05-11 18:35:03,951 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3118, 4.4658, 2.2522, 2.4502], device='cuda:0') 2023-05-11 18:35:13,897 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0021-26306-0_sp0.9 from training. Duration: 21.2444375 2023-05-11 18:35:13,937 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0014-15845-0_sp0.9 from training. Duration: 31.02225 2023-05-11 18:35:15,619 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.const_attention_rate, batch_count=724480.0, ans=0.025 2023-05-11 18:35:23,876 INFO [train.py:1021] (0/2) Epoch 40, batch 3100, loss[loss=0.1642, simple_loss=0.2587, pruned_loss=0.03481, over 37102.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2534, pruned_loss=0.03884, over 7129683.13 frames. ], batch size: 107, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:35:23,976 WARNING [train.py:1182] (0/2) Exclude cut with ID 432-122774-0017-62487-0 from training. Duration: 22.395 2023-05-11 18:35:25,588 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.attention_skip_rate, batch_count=724530.0, ans=0.0 2023-05-11 18:35:40,180 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0045-15876-0 from training. Duration: 21.075 2023-05-11 18:35:46,482 WARNING [train.py:1182] (0/2) Exclude cut with ID 6482-98857-0025-147532-0_sp0.9 from training. Duration: 20.0055625 2023-05-11 18:35:46,489 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0037-132304-0_sp0.9 from training. Duration: 22.05 2023-05-11 18:35:47,967 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0 from training. Duration: 26.8349375 2023-05-11 18:35:50,905 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0_sp1.1 from training. Duration: 22.1090625 2023-05-11 18:35:58,014 WARNING [train.py:1182] (0/2) Exclude cut with ID 7699-105389-0094-26379-0_sp0.9 from training. Duration: 26.6166875 2023-05-11 18:36:03,739 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.955e+02 3.428e+02 3.880e+02 4.527e+02 7.270e+02, threshold=7.761e+02, percent-clipped=0.0 2023-05-11 18:36:12,837 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=724680.0, ans=0.125 2023-05-11 18:36:15,478 WARNING [train.py:1182] (0/2) Exclude cut with ID 2046-178027-0000-53705-0_sp0.9 from training. 
Duration: 20.3055625 2023-05-11 18:36:17,116 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=724680.0, ans=0.1 2023-05-11 18:36:25,179 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=724730.0, ans=0.1 2023-05-11 18:36:32,382 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=724730.0, ans=0.0 2023-05-11 18:36:38,462 INFO [train.py:1021] (0/2) Epoch 40, batch 3150, loss[loss=0.178, simple_loss=0.2697, pruned_loss=0.04317, over 32003.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2538, pruned_loss=0.03886, over 7136209.54 frames. ], batch size: 170, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:36:38,548 WARNING [train.py:1182] (0/2) Exclude cut with ID 7205-50138-0008-5373-0_sp0.9 from training. Duration: 20.7 2023-05-11 18:36:54,585 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=724830.0, ans=0.125 2023-05-11 18:36:58,640 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=724830.0, ans=0.0 2023-05-11 18:36:58,686 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff3_skip_rate, batch_count=724830.0, ans=0.0 2023-05-11 18:37:22,389 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0 from training. Duration: 22.48 2023-05-11 18:37:38,809 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0003-134302-0_sp0.9 from training. Duration: 29.816625 2023-05-11 18:37:39,012 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=724980.0, ans=0.2 2023-05-11 18:37:42,034 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.attention_skip_rate, batch_count=724980.0, ans=0.0 2023-05-11 18:37:51,953 INFO [train.py:1021] (0/2) Epoch 40, batch 3200, loss[loss=0.1786, simple_loss=0.2741, pruned_loss=0.04156, over 36940.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2542, pruned_loss=0.03902, over 7116329.76 frames. ], batch size: 108, lr: 2.72e-03, grad_scale: 32.0 2023-05-11 18:37:52,218 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=725030.0, ans=0.1 2023-05-11 18:37:55,282 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=725030.0, ans=0.125 2023-05-11 18:37:59,410 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0005-134304-0_sp1.1 from training. Duration: 22.7590625 2023-05-11 18:38:04,500 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0 from training. Duration: 22.555 2023-05-11 18:38:10,292 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.dropout.p, batch_count=725080.0, ans=0.1 2023-05-11 18:38:16,546 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=12.36 vs. limit=15.0 2023-05-11 18:38:22,385 WARNING [train.py:1182] (0/2) Exclude cut with ID 1250-135782-0005-25975-0_sp0.9 from training. 
Duration: 21.688875
2023-05-11 18:38:25,478 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=725130.0, ans=0.2
2023-05-11 18:38:32,178 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.970e+02 3.518e+02 3.962e+02 4.557e+02 7.499e+02, threshold=7.924e+02, percent-clipped=0.0
2023-05-11 18:38:45,469 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=725180.0, ans=0.125
2023-05-11 18:38:57,408 WARNING [train.py:1182] (0/2) Exclude cut with ID 3488-85273-0038-41224-0_sp0.9 from training. Duration: 22.6
2023-05-11 18:39:05,846 INFO [train.py:1021] (0/2) Epoch 40, batch 3250, loss[loss=0.1561, simple_loss=0.2385, pruned_loss=0.03684, over 36834.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2538, pruned_loss=0.03898, over 7110454.68 frames. ], batch size: 96, lr: 2.72e-03, grad_scale: 32.0
2023-05-11 18:39:22,830 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([2.8839, 3.9166, 4.4781, 4.6235], device='cuda:0')
2023-05-11 18:39:36,821 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0009-15840-0 from training. Duration: 24.32
2023-05-11 18:39:37,119 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.1.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.5515, 5.3671, 4.7438, 5.1654], device='cuda:0')
2023-05-11 18:39:44,783 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.2383, 5.5638, 5.3196, 5.9834], device='cuda:0')
2023-05-11 18:40:19,821 INFO [train.py:1021] (0/2) Epoch 40, batch 3300, loss[loss=0.1666, simple_loss=0.2548, pruned_loss=0.03918, over 37002.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2536, pruned_loss=0.03886, over 7141396.57 frames. ], batch size: 104, lr: 2.72e-03, grad_scale: 32.0
2023-05-11 18:40:27,637 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.3.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.0362, 3.6287, 3.4427, 4.3215, 2.7137, 3.7678, 4.3639, 3.7504], device='cuda:0')
2023-05-11 18:40:34,921 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-276745-0093-13116-0_sp0.9 from training. Duration: 21.061125
2023-05-11 18:40:48,891 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0024-15855-0_sp0.9 from training. Duration: 20.32225
2023-05-11 18:41:00,921 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.006e+02 3.626e+02 4.224e+02 5.074e+02 8.196e+02, threshold=8.448e+02, percent-clipped=1.0
2023-05-11 18:41:01,046 WARNING [train.py:1182] (0/2) Exclude cut with ID 3033-130750-0096-55598-0_sp1.1 from training. Duration: 0.7545625
2023-05-11 18:41:18,392 WARNING [train.py:1182] (0/2) Exclude cut with ID 4295-39940-0007-92567-0_sp0.9 from training. Duration: 23.9333125
2023-05-11 18:41:20,655 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=725730.0, ans=0.0
2023-05-11 18:41:33,167 INFO [train.py:1021] (0/2) Epoch 40, batch 3350, loss[loss=0.1558, simple_loss=0.2478, pruned_loss=0.0319, over 36860.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.2533, pruned_loss=0.03875, over 7143976.16 frames. ], batch size: 96, lr: 2.72e-03, grad_scale: 16.0
2023-05-11 18:41:37,836 INFO [scaling.py:1059] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00
2023-05-11 18:41:48,455 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0008-134307-0_sp1.1 from training. Duration: 20.17275
2023-05-11 18:41:48,612 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=725830.0, ans=0.125
2023-05-11 18:41:54,027 WARNING [train.py:1182] (0/2) Exclude cut with ID 6978-92210-0019-146985-0_sp1.1 from training. Duration: 20.436375
2023-05-11 18:42:28,734 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=17.11 vs. limit=22.5
2023-05-11 18:42:37,232 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.skip_rate, batch_count=725980.0, ans=0.04949747468305833
2023-05-11 18:42:47,346 INFO [train.py:1021] (0/2) Epoch 40, batch 3400, loss[loss=0.199, simple_loss=0.2822, pruned_loss=0.0579, over 24686.00 frames. ], tot_loss[loss=0.166, simple_loss=0.2539, pruned_loss=0.03906, over 7119939.39 frames. ], batch size: 234, lr: 2.72e-03, grad_scale: 16.0
2023-05-11 18:43:05,444 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer2.prob, batch_count=726080.0, ans=0.125
2023-05-11 18:43:12,532 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=726080.0, ans=0.125
2023-05-11 18:43:15,179 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0_sp0.9 from training. Duration: 23.1055625
2023-05-11 18:43:18,164 WARNING [train.py:1182] (0/2) Exclude cut with ID 8291-282929-0007-12994-0_sp1.1 from training. Duration: 23.5
2023-05-11 18:43:23,778 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.0.layers.0.conv_module2.whiten, num_groups=1, num_channels=192, metric=6.79 vs. limit=15.0
2023-05-11 18:43:27,397 INFO [zipformer.py:1666] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.3865, 5.7652, 5.5584, 6.1599], device='cuda:0')
2023-05-11 18:43:28,571 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.791e+02 3.477e+02 3.786e+02 4.190e+02 5.940e+02, threshold=7.573e+02, percent-clipped=0.0
2023-05-11 18:43:30,474 WARNING [train.py:1182] (0/2) Exclude cut with ID 7255-291500-0009-134308-0_sp0.9 from training. Duration: 26.62775
2023-05-11 18:43:41,814 WARNING [train.py:1182] (0/2) Exclude cut with ID 6951-79737-0018-132285-0 from training. Duration: 21.105
2023-05-11 18:43:45,058 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=726230.0, ans=0.125
2023-05-11 18:43:46,131 WARNING [train.py:1182] (0/2) Exclude cut with ID 4511-76322-0006-80011-0_sp0.9 from training. Duration: 24.411125
2023-05-11 18:43:57,053 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder_embed.convnext.out_balancer.prob, batch_count=726230.0, ans=0.125
2023-05-11 18:44:01,428 INFO [train.py:1021] (0/2) Epoch 40, batch 3450, loss[loss=0.1397, simple_loss=0.224, pruned_loss=0.02766, over 37066.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.2532, pruned_loss=0.03878, over 7137433.82 frames. ], batch size: 88, lr: 2.72e-03, grad_scale: 16.0
2023-05-11 18:44:16,278 WARNING [train.py:1182] (0/2) Exclude cut with ID 6758-72288-0033-108368-0_sp1.1 from training. Duration: 21.263625
2023-05-11 18:44:27,110 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=726330.0, ans=0.0
2023-05-11 18:44:48,960 WARNING [train.py:1182] (0/2) Exclude cut with ID 4234-40345-0022-142709-0 from training. Duration: 20.795
2023-05-11 18:44:58,765 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0021-15852-0 from training. Duration: 24.76
2023-05-11 18:44:58,788 WARNING [train.py:1182] (0/2) Exclude cut with ID 3867-173237-0077-144769-0_sp0.9 from training. Duration: 22.25
2023-05-11 18:45:00,614 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.prob, batch_count=726480.0, ans=0.125
2023-05-11 18:45:02,114 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.prob, batch_count=726480.0, ans=0.125
2023-05-11 18:45:15,486 INFO [train.py:1021] (0/2) Epoch 40, batch 3500, loss[loss=0.1666, simple_loss=0.2611, pruned_loss=0.03609, over 36824.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.253, pruned_loss=0.03872, over 7144529.27 frames. ], batch size: 111, lr: 2.72e-03, grad_scale: 16.0
2023-05-11 18:45:21,480 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=726530.0, ans=0.0
2023-05-11 18:45:24,286 WARNING [train.py:1182] (0/2) Exclude cut with ID 7357-94126-0026-15857-0_sp1.1 from training. Duration: 20.5045625
2023-05-11 18:45:24,486 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=726530.0, ans=0.07
2023-05-11 18:45:32,255 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.scale_min, batch_count=726580.0, ans=0.2
2023-05-11 18:45:39,735 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=726580.0, ans=0.1
2023-05-11 18:45:52,731 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=726630.0, ans=0.125
2023-05-11 18:45:56,638 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.013e+02 3.374e+02 3.709e+02 4.153e+02 5.917e+02, threshold=7.418e+02, percent-clipped=0.0
2023-05-11 18:46:01,244 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.attention_skip_rate, batch_count=726680.0, ans=0.0
2023-05-11 18:46:10,092 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=726680.0, ans=0.125
2023-05-11 18:46:28,637 INFO [train.py:1021] (0/2) Epoch 40, batch 3550, loss[loss=0.1661, simple_loss=0.2527, pruned_loss=0.03974, over 37045.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2536, pruned_loss=0.03892, over 7140758.17 frames. ], batch size: 99, lr: 2.71e-03, grad_scale: 16.0
2023-05-11 18:47:10,946 INFO [scaling.py:969] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=2.78 vs. limit=12.0
2023-05-11 18:47:31,701 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=726980.0, ans=0.0
2023-05-11 18:47:39,545 INFO [train.py:1021] (0/2) Epoch 40, batch 3600, loss[loss=0.1542, simple_loss=0.2342, pruned_loss=0.03708, over 37066.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2534, pruned_loss=0.039, over 7154108.41 frames. ], batch size: 88, lr: 2.71e-03, grad_scale: 32.0
2023-05-11 18:48:01,770 INFO [scaling.py:178] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=727080.0, ans=0.0
2023-05-11 18:48:19,769 INFO [optim.py:478] (0/2) Clipping_scale=2.0, grad-norm quartiles 2.795e+02 3.359e+02 3.556e+02 4.025e+02 5.854e+02, threshold=7.112e+02, percent-clipped=0.0
2023-05-11 18:48:30,142 INFO [checkpoint.py:75] (0/2) Saving checkpoint to pruned_transducer_stateless7/exp1119-smaller-md1500/epoch-40.pt
2023-05-11 18:48:32,009 INFO [train.py:1281] (0/2) Done!