2023-03-30 10:00:09,177 INFO [train.py:962] (1/4) Training started
2023-03-30 10:00:09,177 INFO [train.py:972] (1/4) Device: cuda:1
2023-03-30 10:00:09,181 INFO [train.py:981] (1/4) {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '9426c9f730820d291f5dcb06be337662595fa7b4', 'k2-git-date': 'Sun Feb 5 17:35:01 2023', 'lhotse-version': '1.13.0.dev+git.4cbd1bde.clean', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'bbpe', 'icefall-git-sha1': 'a7e0d24-dirty', 'icefall-git-date': 'Tue Mar 28 18:53:54 2023', 'icefall-path': '/ceph-kw/kangwei/code/icefall_bbpe', 'k2-path': '/ceph-hw/kangwei/code/k2_release/k2/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-hw/kangwei/dev_tools/anaconda3/envs/rnnt2/lib/python3.8/site-packages/lhotse-1.13.0.dev0+git.4cbd1bde.clean-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-9-0208143539-7dcb6bfd79-b6fdq', 'IP address': '10.177.13.150'}, 'world_size': 4, 'master_port': 12535, 'tensorboard': True, 'num_epochs': 50, 'start_epoch': 48, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless7_bbpe/exp'), 'bbpe_model': 'data/lang_bbpe_500/bbpe.model', 'base_lr': 0.05, 'lr_batches': 5000, 'lr_epochs': 3.5, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 2000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': True, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 800, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'blank_id': 0, 'vocab_size': 500}
2023-03-30 10:00:09,181 INFO [train.py:983] (1/4) About to create model
2023-03-30 10:00:10,219 INFO [zipformer.py:178] (1/4) At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-03-30 10:00:10,245 INFO [train.py:987] (1/4) Number of model parameters: 70369391
2023-03-30 10:00:10,245 INFO [checkpoint.py:112] (1/4) Loading checkpoint from pruned_transducer_stateless7_bbpe/exp/epoch-47.pt
2023-03-30 10:00:28,814 INFO [train.py:1002] (1/4) Using DDP
2023-03-30 10:00:29,385 INFO [train.py:1019] (1/4) Loading optimizer state dict
2023-03-30 10:00:30,575 INFO [train.py:1027] (1/4) Loading scheduler state dict
2023-03-30 10:00:30,575 INFO [asr_datamodule.py:407] (1/4) About to get train cuts
2023-03-30 10:00:30,579 INFO [train.py:1083] (1/4) Filtering short and long utterances.
2023-03-30 10:00:30,579 INFO [train.py:1086] (1/4) Tokenizing and encoding texts in train cuts.
2023-03-30 10:00:30,579 INFO [asr_datamodule.py:224] (1/4) About to get Musan cuts
2023-03-30 10:00:34,318 INFO [asr_datamodule.py:229] (1/4) Enable MUSAN
2023-03-30 10:00:34,318 INFO [asr_datamodule.py:252] (1/4) Enable SpecAugment
2023-03-30 10:00:34,318 INFO [asr_datamodule.py:253] (1/4) Time warp factor: 80
2023-03-30 10:00:34,318 INFO [asr_datamodule.py:263] (1/4) Num frame mask: 10
2023-03-30 10:00:34,319 INFO [asr_datamodule.py:276] (1/4) About to create train dataset
2023-03-30 10:00:34,319 INFO [asr_datamodule.py:303] (1/4) Using DynamicBucketingSampler.
2023-03-30 10:00:46,491 INFO [asr_datamodule.py:320] (1/4) About to create train dataloader
2023-03-30 10:00:46,492 INFO [asr_datamodule.py:414] (1/4) About to get dev cuts
2023-03-30 10:00:46,494 INFO [train.py:1102] (1/4) Tokenizing and encoding texts in valid cuts.
2023-03-30 10:00:46,494 INFO [asr_datamodule.py:351] (1/4) About to create dev dataset
2023-03-30 10:00:47,369 INFO [asr_datamodule.py:370] (1/4) About to create dev dataloader
2023-03-30 10:00:47,370 INFO [train.py:1119] (1/4) Loading grad scaler state dict
2023-03-30 10:01:32,484 INFO [train.py:1188] (1/4) Saving batch to pruned_transducer_stateless7_bbpe/exp/batch-b406dd29-9b57-6d64-c490-5c0914c25b99.pt
2023-03-30 10:01:33,057 INFO [train.py:1194] (1/4) features shape: torch.Size([259, 308, 80])
2023-03-30 10:01:33,065 INFO [train.py:1198] (1/4) num tokens: 4037