marcoyang committed
Commit: e712f8f
1 parent: b2179c5

update training logs

Files changed (32)
  1. train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-0 +0 -0
  2. train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-1 +0 -0
  3. train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-2 +0 -0
  4. train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-3 +0 -0
  5. train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-4 +0 -0
  6. train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-5 +0 -0
  7. train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-0 +0 -0
  8. train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-1 +0 -0
  9. train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-2 +0 -0
  10. train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-3 +0 -0
  11. train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-4 +0 -0
  12. train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-5 +0 -0
  13. train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-0 +0 -0
  14. train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-1 +0 -0
  15. train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-2 +0 -0
  16. train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-3 +0 -0
  17. train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-4 +0 -0
  18. train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-5 +0 -0
  19. train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-0 +0 -0
  20. train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-1 +0 -0
  21. train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-2 +0 -0
  22. train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-3 +0 -0
  23. train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-4 +0 -0
  24. train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-5 +0 -0
  25. train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-0 +10 -0
  26. train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-1 +10 -0
  27. train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-2 +10 -0
  28. train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-3 +10 -0
  29. train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-4 +10 -0
  30. train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-5 +10 -0
  31. train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-6 +10 -0
  32. train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-7 +10 -0
train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-0 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-1 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-2 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-3 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-4 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-04-08-30-51-5 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-0 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-1 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-2 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-3 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-4 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-05-13-45-44-5 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-0 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-1 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-2 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-3 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-4 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-06-22-07-03-5 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-0 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-1 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-2 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-3 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-4 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-08-08-32-59-5 ADDED
The diff for this file is too large to render. See raw diff
 
train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-0 ADDED
@@ -0,0 +1,10 @@
+ 2022-09-10 14:48:17,912 INFO [train_hubert.py:997] (0/8) Training started
+ 2022-09-10 14:48:17,916 INFO [train_hubert.py:1007] (0/8) Device: cuda:0
+ 2022-09-10 14:48:17,926 INFO [train_hubert.py:1016] (0/8) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 1280, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.18', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2d82a1d9350263ae48a6953034ce570e3d5208c1', 'k2-git-date': 'Mon Aug 15 02:09:05 2022', 'lhotse-version': '1.5.0', 'torch-version': '1.10.1', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.9', 'icefall-git-branch': 'finetune_hubert', 'icefall-git-sha1': 'bf6c560-dirty', 'icefall-git-date': 'Thu Aug 25 11:33:43 2022', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-0701202559-8476c48f5f-xmr4s', 'IP address': '10.177.28.74'}, 'world_size': 8, 'master_port': 12193, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('finetune_hubert_transducer/exp_960h_TSA_freeze10k_normalized_total32k_with_musan'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'codebook_loss_scale': 0.1, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'enable_distillation': True, 'encoder_type': 'hubert', 'hubert_model_dir': '/ceph-data4/yangxiaoyu/pretrained_models/hubert_xtralarge_ll60k_finetune_ls960.pt', 'hubert_output_size': 1280, 'hubert_freeze_finetune_updates': 10000, 'hubert_mask_prob': 0.25, 'hubert_mask_channel_prob': 0.5, 'hubert_mask_channel_length': 64, 'hubert_subsample_output': True, 'hubert_subsample_mode': 'concat_tanh', 'use_tri_state_optim': True, 'TSA_init_lr': 5e-07, 'TSA_warmup_lr': 3e-05, 'TSA_end_lr': 1.5e-06, 'TSA_total_steps': 320000, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 60, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': False, 'spec_aug_time_warp_factor': -1, 'enable_musan': True, 'input_strategy': 'AudioSamples', 'blank_id': 0, 'vocab_size': 500}
+ 2022-09-10 14:48:17,927 INFO [train_hubert.py:1018] (0/8) About to create model
+ 2022-09-10 14:48:18,671 INFO [text_to_speech.py:34] (0/8) Please install tensorboardX: pip install tensorboardX
+ 2022-09-10 14:49:27,447 INFO [hubert_pretraining.py:116] (0/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,448 INFO [hubert_pretraining.py:117] (0/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['ltr'], 'label_dir': None, 'label_rate': -1.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 300000, 'min_sample_size': None, 'single_target': True, 'random_crop': False, 'pad_audio': False}
+ 2022-09-10 14:49:27,599 INFO [hubert_pretraining.py:116] (0/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,599 INFO [hubert_pretraining.py:117] (0/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/checkpoint/wnhsu/experiments/hubert/kmeans_20210121/km_dataset_librivox.model_iter_2.all', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
+ 2022-09-10 14:49:27,611 INFO [hubert.py:250] (0/8) HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': layer_norm, 'encoder_layers': 48, 'encoder_embed_dim': 1280, 'encoder_ffn_embed_dim': 5120, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.1, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 1024, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.0, 'mask_length': 10, 'mask_prob': 0.5, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 64, 'mask_channel_prob': 0.25, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
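All eight log-train-2022-09-10-14-48-17-* files added below record the same startup sequence, one per DDP rank (0/8 through 7/8): the hyperparameter dict logged from train_hubert.py:1016, followed by the fairseq HubertPretrainingTask and HubertModel configs of the loaded hubert_xtralarge_ll60k_finetune_ls960.pt checkpoint. If you want that hyperparameter dict back as a Python object, a minimal sketch (not part of this repo; it reads the raw log file, where the leading "+" diff markers are absent) could look like this:

```python
from pathlib import PurePosixPath

def load_params(log_path):
    """Recover the hyperparameter dict that train_hubert.py logs at startup
    (the line tagged [train_hubert.py:1016])."""
    with open(log_path) as f:
        for line in f:
            if "[train_hubert.py:1016]" in line:
                dict_text = line[line.index("{"):].strip()
                # The dict literal contains `inf` and `PosixPath(...)`, so
                # ast.literal_eval cannot parse it; eval with a tiny namespace
                # is acceptable for a locally produced, trusted log file.
                # PosixPath is mapped to PurePosixPath so this also runs on
                # non-POSIX systems.
                return eval(dict_text, {"inf": float("inf"),
                                        "PosixPath": PurePosixPath})
    raise ValueError(f"no hyperparameter line found in {log_path}")

params = load_params("train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-0")
print(params["exp_dir"], params["hubert_model_dir"], params["TSA_total_steps"])
```

The same helper works for any of the eight rank logs, since every rank logs an identical dict.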
train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-1 ADDED
@@ -0,0 +1,10 @@
+ 2022-09-10 14:48:17,948 INFO [train_hubert.py:997] (1/8) Training started
+ 2022-09-10 14:48:17,948 INFO [train_hubert.py:1007] (1/8) Device: cuda:1
+ 2022-09-10 14:48:17,951 INFO [train_hubert.py:1016] (1/8) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 1280, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.18', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2d82a1d9350263ae48a6953034ce570e3d5208c1', 'k2-git-date': 'Mon Aug 15 02:09:05 2022', 'lhotse-version': '1.5.0', 'torch-version': '1.10.1', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.9', 'icefall-git-branch': 'finetune_hubert', 'icefall-git-sha1': 'bf6c560-dirty', 'icefall-git-date': 'Thu Aug 25 11:33:43 2022', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-0701202559-8476c48f5f-xmr4s', 'IP address': '10.177.28.74'}, 'world_size': 8, 'master_port': 12193, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('finetune_hubert_transducer/exp_960h_TSA_freeze10k_normalized_total32k_with_musan'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'codebook_loss_scale': 0.1, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'enable_distillation': True, 'encoder_type': 'hubert', 'hubert_model_dir': '/ceph-data4/yangxiaoyu/pretrained_models/hubert_xtralarge_ll60k_finetune_ls960.pt', 'hubert_output_size': 1280, 'hubert_freeze_finetune_updates': 10000, 'hubert_mask_prob': 0.25, 'hubert_mask_channel_prob': 0.5, 'hubert_mask_channel_length': 64, 'hubert_subsample_output': True, 'hubert_subsample_mode': 'concat_tanh', 'use_tri_state_optim': True, 'TSA_init_lr': 5e-07, 'TSA_warmup_lr': 3e-05, 'TSA_end_lr': 1.5e-06, 'TSA_total_steps': 320000, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 60, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': False, 'spec_aug_time_warp_factor': -1, 'enable_musan': True, 'input_strategy': 'AudioSamples', 'blank_id': 0, 'vocab_size': 500}
+ 2022-09-10 14:48:17,951 INFO [train_hubert.py:1018] (1/8) About to create model
+ 2022-09-10 14:48:18,671 INFO [text_to_speech.py:34] (1/8) Please install tensorboardX: pip install tensorboardX
+ 2022-09-10 14:49:27,501 INFO [hubert_pretraining.py:116] (1/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,501 INFO [hubert_pretraining.py:117] (1/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['ltr'], 'label_dir': None, 'label_rate': -1.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 300000, 'min_sample_size': None, 'single_target': True, 'random_crop': False, 'pad_audio': False}
+ 2022-09-10 14:49:27,613 INFO [hubert_pretraining.py:116] (1/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,613 INFO [hubert_pretraining.py:117] (1/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/checkpoint/wnhsu/experiments/hubert/kmeans_20210121/km_dataset_librivox.model_iter_2.all', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
+ 2022-09-10 14:49:27,630 INFO [hubert.py:250] (1/8) HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': layer_norm, 'encoder_layers': 48, 'encoder_embed_dim': 1280, 'encoder_ffn_embed_dim': 5120, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.1, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 1024, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.0, 'mask_length': 10, 'mask_prob': 0.5, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 64, 'mask_channel_prob': 0.25, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-2 ADDED
@@ -0,0 +1,10 @@
+ 2022-09-10 14:48:17,948 INFO [train_hubert.py:997] (2/8) Training started
+ 2022-09-10 14:48:17,948 INFO [train_hubert.py:1007] (2/8) Device: cuda:2
+ 2022-09-10 14:48:17,951 INFO [train_hubert.py:1016] (2/8) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 1280, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.18', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2d82a1d9350263ae48a6953034ce570e3d5208c1', 'k2-git-date': 'Mon Aug 15 02:09:05 2022', 'lhotse-version': '1.5.0', 'torch-version': '1.10.1', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.9', 'icefall-git-branch': 'finetune_hubert', 'icefall-git-sha1': 'bf6c560-dirty', 'icefall-git-date': 'Thu Aug 25 11:33:43 2022', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-0701202559-8476c48f5f-xmr4s', 'IP address': '10.177.28.74'}, 'world_size': 8, 'master_port': 12193, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('finetune_hubert_transducer/exp_960h_TSA_freeze10k_normalized_total32k_with_musan'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'codebook_loss_scale': 0.1, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'enable_distillation': True, 'encoder_type': 'hubert', 'hubert_model_dir': '/ceph-data4/yangxiaoyu/pretrained_models/hubert_xtralarge_ll60k_finetune_ls960.pt', 'hubert_output_size': 1280, 'hubert_freeze_finetune_updates': 10000, 'hubert_mask_prob': 0.25, 'hubert_mask_channel_prob': 0.5, 'hubert_mask_channel_length': 64, 'hubert_subsample_output': True, 'hubert_subsample_mode': 'concat_tanh', 'use_tri_state_optim': True, 'TSA_init_lr': 5e-07, 'TSA_warmup_lr': 3e-05, 'TSA_end_lr': 1.5e-06, 'TSA_total_steps': 320000, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 60, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': False, 'spec_aug_time_warp_factor': -1, 'enable_musan': True, 'input_strategy': 'AudioSamples', 'blank_id': 0, 'vocab_size': 500}
+ 2022-09-10 14:48:17,952 INFO [train_hubert.py:1018] (2/8) About to create model
+ 2022-09-10 14:48:18,671 INFO [text_to_speech.py:34] (2/8) Please install tensorboardX: pip install tensorboardX
+ 2022-09-10 14:49:27,448 INFO [hubert_pretraining.py:116] (2/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,448 INFO [hubert_pretraining.py:117] (2/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['ltr'], 'label_dir': None, 'label_rate': -1.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 300000, 'min_sample_size': None, 'single_target': True, 'random_crop': False, 'pad_audio': False}
+ 2022-09-10 14:49:27,568 INFO [hubert_pretraining.py:116] (2/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,568 INFO [hubert_pretraining.py:117] (2/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/checkpoint/wnhsu/experiments/hubert/kmeans_20210121/km_dataset_librivox.model_iter_2.all', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
+ 2022-09-10 14:49:27,581 INFO [hubert.py:250] (2/8) HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': layer_norm, 'encoder_layers': 48, 'encoder_embed_dim': 1280, 'encoder_ffn_embed_dim': 5120, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.1, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 1024, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.0, 'mask_length': 10, 'mask_prob': 0.5, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 64, 'mask_channel_prob': 0.25, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-3 ADDED
@@ -0,0 +1,10 @@
+ 2022-09-10 14:48:17,943 INFO [train_hubert.py:997] (3/8) Training started
+ 2022-09-10 14:48:17,944 INFO [train_hubert.py:1007] (3/8) Device: cuda:3
+ 2022-09-10 14:48:17,946 INFO [train_hubert.py:1016] (3/8) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 1280, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.18', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2d82a1d9350263ae48a6953034ce570e3d5208c1', 'k2-git-date': 'Mon Aug 15 02:09:05 2022', 'lhotse-version': '1.5.0', 'torch-version': '1.10.1', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.9', 'icefall-git-branch': 'finetune_hubert', 'icefall-git-sha1': 'bf6c560-dirty', 'icefall-git-date': 'Thu Aug 25 11:33:43 2022', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-0701202559-8476c48f5f-xmr4s', 'IP address': '10.177.28.74'}, 'world_size': 8, 'master_port': 12193, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('finetune_hubert_transducer/exp_960h_TSA_freeze10k_normalized_total32k_with_musan'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'codebook_loss_scale': 0.1, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'enable_distillation': True, 'encoder_type': 'hubert', 'hubert_model_dir': '/ceph-data4/yangxiaoyu/pretrained_models/hubert_xtralarge_ll60k_finetune_ls960.pt', 'hubert_output_size': 1280, 'hubert_freeze_finetune_updates': 10000, 'hubert_mask_prob': 0.25, 'hubert_mask_channel_prob': 0.5, 'hubert_mask_channel_length': 64, 'hubert_subsample_output': True, 'hubert_subsample_mode': 'concat_tanh', 'use_tri_state_optim': True, 'TSA_init_lr': 5e-07, 'TSA_warmup_lr': 3e-05, 'TSA_end_lr': 1.5e-06, 'TSA_total_steps': 320000, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 60, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': False, 'spec_aug_time_warp_factor': -1, 'enable_musan': True, 'input_strategy': 'AudioSamples', 'blank_id': 0, 'vocab_size': 500}
+ 2022-09-10 14:48:17,947 INFO [train_hubert.py:1018] (3/8) About to create model
+ 2022-09-10 14:48:18,671 INFO [text_to_speech.py:34] (3/8) Please install tensorboardX: pip install tensorboardX
+ 2022-09-10 14:49:27,445 INFO [hubert_pretraining.py:116] (3/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,446 INFO [hubert_pretraining.py:117] (3/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['ltr'], 'label_dir': None, 'label_rate': -1.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 300000, 'min_sample_size': None, 'single_target': True, 'random_crop': False, 'pad_audio': False}
+ 2022-09-10 14:49:27,568 INFO [hubert_pretraining.py:116] (3/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,569 INFO [hubert_pretraining.py:117] (3/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/checkpoint/wnhsu/experiments/hubert/kmeans_20210121/km_dataset_librivox.model_iter_2.all', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
+ 2022-09-10 14:49:27,582 INFO [hubert.py:250] (3/8) HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': layer_norm, 'encoder_layers': 48, 'encoder_embed_dim': 1280, 'encoder_ffn_embed_dim': 5120, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.1, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 1024, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.0, 'mask_length': 10, 'mask_prob': 0.5, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 64, 'mask_channel_prob': 0.25, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-4 ADDED
@@ -0,0 +1,10 @@
+ 2022-09-10 14:48:17,946 INFO [train_hubert.py:997] (4/8) Training started
+ 2022-09-10 14:48:17,947 INFO [train_hubert.py:1007] (4/8) Device: cuda:4
+ 2022-09-10 14:48:17,950 INFO [train_hubert.py:1016] (4/8) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 1280, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.18', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2d82a1d9350263ae48a6953034ce570e3d5208c1', 'k2-git-date': 'Mon Aug 15 02:09:05 2022', 'lhotse-version': '1.5.0', 'torch-version': '1.10.1', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.9', 'icefall-git-branch': 'finetune_hubert', 'icefall-git-sha1': 'bf6c560-dirty', 'icefall-git-date': 'Thu Aug 25 11:33:43 2022', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-0701202559-8476c48f5f-xmr4s', 'IP address': '10.177.28.74'}, 'world_size': 8, 'master_port': 12193, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('finetune_hubert_transducer/exp_960h_TSA_freeze10k_normalized_total32k_with_musan'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'codebook_loss_scale': 0.1, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'enable_distillation': True, 'encoder_type': 'hubert', 'hubert_model_dir': '/ceph-data4/yangxiaoyu/pretrained_models/hubert_xtralarge_ll60k_finetune_ls960.pt', 'hubert_output_size': 1280, 'hubert_freeze_finetune_updates': 10000, 'hubert_mask_prob': 0.25, 'hubert_mask_channel_prob': 0.5, 'hubert_mask_channel_length': 64, 'hubert_subsample_output': True, 'hubert_subsample_mode': 'concat_tanh', 'use_tri_state_optim': True, 'TSA_init_lr': 5e-07, 'TSA_warmup_lr': 3e-05, 'TSA_end_lr': 1.5e-06, 'TSA_total_steps': 320000, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 60, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': False, 'spec_aug_time_warp_factor': -1, 'enable_musan': True, 'input_strategy': 'AudioSamples', 'blank_id': 0, 'vocab_size': 500}
+ 2022-09-10 14:48:17,950 INFO [train_hubert.py:1018] (4/8) About to create model
+ 2022-09-10 14:48:18,671 INFO [text_to_speech.py:34] (4/8) Please install tensorboardX: pip install tensorboardX
+ 2022-09-10 14:49:27,471 INFO [hubert_pretraining.py:116] (4/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,471 INFO [hubert_pretraining.py:117] (4/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['ltr'], 'label_dir': None, 'label_rate': -1.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 300000, 'min_sample_size': None, 'single_target': True, 'random_crop': False, 'pad_audio': False}
+ 2022-09-10 14:49:27,592 INFO [hubert_pretraining.py:116] (4/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,592 INFO [hubert_pretraining.py:117] (4/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/checkpoint/wnhsu/experiments/hubert/kmeans_20210121/km_dataset_librivox.model_iter_2.all', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
+ 2022-09-10 14:49:27,608 INFO [hubert.py:250] (4/8) HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': layer_norm, 'encoder_layers': 48, 'encoder_embed_dim': 1280, 'encoder_ffn_embed_dim': 5120, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.1, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 1024, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.0, 'mask_length': 10, 'mask_prob': 0.5, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 64, 'mask_channel_prob': 0.25, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-5 ADDED
@@ -0,0 +1,10 @@
+ 2022-09-10 14:48:17,946 INFO [train_hubert.py:997] (5/8) Training started
+ 2022-09-10 14:48:17,947 INFO [train_hubert.py:1007] (5/8) Device: cuda:5
+ 2022-09-10 14:48:17,950 INFO [train_hubert.py:1016] (5/8) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 1280, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.18', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2d82a1d9350263ae48a6953034ce570e3d5208c1', 'k2-git-date': 'Mon Aug 15 02:09:05 2022', 'lhotse-version': '1.5.0', 'torch-version': '1.10.1', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.9', 'icefall-git-branch': 'finetune_hubert', 'icefall-git-sha1': 'bf6c560-dirty', 'icefall-git-date': 'Thu Aug 25 11:33:43 2022', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-0701202559-8476c48f5f-xmr4s', 'IP address': '10.177.28.74'}, 'world_size': 8, 'master_port': 12193, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('finetune_hubert_transducer/exp_960h_TSA_freeze10k_normalized_total32k_with_musan'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'codebook_loss_scale': 0.1, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'enable_distillation': True, 'encoder_type': 'hubert', 'hubert_model_dir': '/ceph-data4/yangxiaoyu/pretrained_models/hubert_xtralarge_ll60k_finetune_ls960.pt', 'hubert_output_size': 1280, 'hubert_freeze_finetune_updates': 10000, 'hubert_mask_prob': 0.25, 'hubert_mask_channel_prob': 0.5, 'hubert_mask_channel_length': 64, 'hubert_subsample_output': True, 'hubert_subsample_mode': 'concat_tanh', 'use_tri_state_optim': True, 'TSA_init_lr': 5e-07, 'TSA_warmup_lr': 3e-05, 'TSA_end_lr': 1.5e-06, 'TSA_total_steps': 320000, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 60, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': False, 'spec_aug_time_warp_factor': -1, 'enable_musan': True, 'input_strategy': 'AudioSamples', 'blank_id': 0, 'vocab_size': 500}
+ 2022-09-10 14:48:17,950 INFO [train_hubert.py:1018] (5/8) About to create model
+ 2022-09-10 14:48:18,673 INFO [text_to_speech.py:34] (5/8) Please install tensorboardX: pip install tensorboardX
+ 2022-09-10 14:49:27,446 INFO [hubert_pretraining.py:116] (5/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,447 INFO [hubert_pretraining.py:117] (5/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['ltr'], 'label_dir': None, 'label_rate': -1.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 300000, 'min_sample_size': None, 'single_target': True, 'random_crop': False, 'pad_audio': False}
+ 2022-09-10 14:49:27,573 INFO [hubert_pretraining.py:116] (5/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,573 INFO [hubert_pretraining.py:117] (5/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/checkpoint/wnhsu/experiments/hubert/kmeans_20210121/km_dataset_librivox.model_iter_2.all', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
+ 2022-09-10 14:49:27,586 INFO [hubert.py:250] (5/8) HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': layer_norm, 'encoder_layers': 48, 'encoder_embed_dim': 1280, 'encoder_ffn_embed_dim': 5120, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.1, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 1024, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.0, 'mask_length': 10, 'mask_prob': 0.5, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 64, 'mask_channel_prob': 0.25, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-6 ADDED
@@ -0,0 +1,10 @@
+ 2022-09-10 14:48:17,947 INFO [train_hubert.py:997] (6/8) Training started
+ 2022-09-10 14:48:17,947 INFO [train_hubert.py:1007] (6/8) Device: cuda:6
+ 2022-09-10 14:48:17,950 INFO [train_hubert.py:1016] (6/8) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 1280, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.18', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2d82a1d9350263ae48a6953034ce570e3d5208c1', 'k2-git-date': 'Mon Aug 15 02:09:05 2022', 'lhotse-version': '1.5.0', 'torch-version': '1.10.1', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.9', 'icefall-git-branch': 'finetune_hubert', 'icefall-git-sha1': 'bf6c560-dirty', 'icefall-git-date': 'Thu Aug 25 11:33:43 2022', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-0701202559-8476c48f5f-xmr4s', 'IP address': '10.177.28.74'}, 'world_size': 8, 'master_port': 12193, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('finetune_hubert_transducer/exp_960h_TSA_freeze10k_normalized_total32k_with_musan'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'codebook_loss_scale': 0.1, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'enable_distillation': True, 'encoder_type': 'hubert', 'hubert_model_dir': '/ceph-data4/yangxiaoyu/pretrained_models/hubert_xtralarge_ll60k_finetune_ls960.pt', 'hubert_output_size': 1280, 'hubert_freeze_finetune_updates': 10000, 'hubert_mask_prob': 0.25, 'hubert_mask_channel_prob': 0.5, 'hubert_mask_channel_length': 64, 'hubert_subsample_output': True, 'hubert_subsample_mode': 'concat_tanh', 'use_tri_state_optim': True, 'TSA_init_lr': 5e-07, 'TSA_warmup_lr': 3e-05, 'TSA_end_lr': 1.5e-06, 'TSA_total_steps': 320000, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 60, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': False, 'spec_aug_time_warp_factor': -1, 'enable_musan': True, 'input_strategy': 'AudioSamples', 'blank_id': 0, 'vocab_size': 500}
+ 2022-09-10 14:48:17,951 INFO [train_hubert.py:1018] (6/8) About to create model
+ 2022-09-10 14:48:18,673 INFO [text_to_speech.py:34] (6/8) Please install tensorboardX: pip install tensorboardX
+ 2022-09-10 14:49:27,449 INFO [hubert_pretraining.py:116] (6/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,449 INFO [hubert_pretraining.py:117] (6/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['ltr'], 'label_dir': None, 'label_rate': -1.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 300000, 'min_sample_size': None, 'single_target': True, 'random_crop': False, 'pad_audio': False}
+ 2022-09-10 14:49:27,571 INFO [hubert_pretraining.py:116] (6/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,572 INFO [hubert_pretraining.py:117] (6/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/checkpoint/wnhsu/experiments/hubert/kmeans_20210121/km_dataset_librivox.model_iter_2.all', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
+ 2022-09-10 14:49:27,584 INFO [hubert.py:250] (6/8) HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': layer_norm, 'encoder_layers': 48, 'encoder_embed_dim': 1280, 'encoder_ffn_embed_dim': 5120, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.1, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 1024, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.0, 'mask_length': 10, 'mask_prob': 0.5, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 64, 'mask_channel_prob': 0.25, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
train_960h_hubert_large/log/log-train-2022-09-10-14-48-17-7 ADDED
@@ -0,0 +1,10 @@
+ 2022-09-10 14:48:17,947 INFO [train_hubert.py:997] (7/8) Training started
+ 2022-09-10 14:48:17,947 INFO [train_hubert.py:1007] (7/8) Device: cuda:7
+ 2022-09-10 14:48:17,950 INFO [train_hubert.py:1016] (7/8) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 1280, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.18', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2d82a1d9350263ae48a6953034ce570e3d5208c1', 'k2-git-date': 'Mon Aug 15 02:09:05 2022', 'lhotse-version': '1.5.0', 'torch-version': '1.10.1', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.9', 'icefall-git-branch': 'finetune_hubert', 'icefall-git-sha1': 'bf6c560-dirty', 'icefall-git-date': 'Thu Aug 25 11:33:43 2022', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_test/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-0701202559-8476c48f5f-xmr4s', 'IP address': '10.177.28.74'}, 'world_size': 8, 'master_port': 12193, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('finetune_hubert_transducer/exp_960h_TSA_freeze10k_normalized_total32k_with_musan'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'codebook_loss_scale': 0.1, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'enable_distillation': True, 'encoder_type': 'hubert', 'hubert_model_dir': '/ceph-data4/yangxiaoyu/pretrained_models/hubert_xtralarge_ll60k_finetune_ls960.pt', 'hubert_output_size': 1280, 'hubert_freeze_finetune_updates': 10000, 'hubert_mask_prob': 0.25, 'hubert_mask_channel_prob': 0.5, 'hubert_mask_channel_length': 64, 'hubert_subsample_output': True, 'hubert_subsample_mode': 'concat_tanh', 'use_tri_state_optim': True, 'TSA_init_lr': 5e-07, 'TSA_warmup_lr': 3e-05, 'TSA_end_lr': 1.5e-06, 'TSA_total_steps': 320000, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 60, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': False, 'spec_aug_time_warp_factor': -1, 'enable_musan': True, 'input_strategy': 'AudioSamples', 'blank_id': 0, 'vocab_size': 500}
+ 2022-09-10 14:48:17,951 INFO [train_hubert.py:1018] (7/8) About to create model
+ 2022-09-10 14:48:18,671 INFO [text_to_speech.py:34] (7/8) Please install tensorboardX: pip install tensorboardX
+ 2022-09-10 14:49:27,447 INFO [hubert_pretraining.py:116] (7/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,448 INFO [hubert_pretraining.py:117] (7/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['ltr'], 'label_dir': None, 'label_rate': -1.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 300000, 'min_sample_size': None, 'single_target': True, 'random_crop': False, 'pad_audio': False}
+ 2022-09-10 14:49:27,573 INFO [hubert_pretraining.py:116] (7/8) current directory is /ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_forked/egs/librispeech/ASR
+ 2022-09-10 14:49:27,573 INFO [hubert_pretraining.py:117] (7/8) HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/checkpoint/wnhsu/experiments/hubert/kmeans_20210121/km_dataset_librivox.model_iter_2.all', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
+ 2022-09-10 14:49:27,586 INFO [hubert.py:250] (7/8) HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': layer_norm, 'encoder_layers': 48, 'encoder_embed_dim': 1280, 'encoder_ffn_embed_dim': 5120, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.1, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 1024, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.0, 'mask_length': 10, 'mask_prob': 0.5, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 64, 'mask_channel_prob': 0.25, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
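The diffs above show ranks 0-7 logging byte-identical hyperparameter dicts (only the timestamps and the cuda:N device differ). A quick self-contained consistency check, intended to be run from the root of this repo, could look like the following; this is only an illustrative sketch, not a script shipped with these logs:

```python
from pathlib import Path

LOG_DIR = Path("train_960h_hubert_large/log")
STAMP = "log-train-2022-09-10-14-48-17"

def params_text(path: Path) -> str:
    """Return the raw hyperparameter dict text (the [train_hubert.py:1016] line)
    with its timestamp/rank prefix stripped."""
    for line in path.read_text().splitlines():
        if "[train_hubert.py:1016]" in line:
            return line[line.index("{"):]
    raise ValueError(f"no hyperparameter line in {path}")

reference = params_text(LOG_DIR / f"{STAMP}-0")
for rank in range(1, 8):
    assert params_text(LOG_DIR / f"{STAMP}-{rank}") == reference, \
        f"rank {rank} logged a different config"
print("all 8 ranks logged identical hyperparameters")
```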