diff --git "a/exp/asr_stats_raw_en_word/logdir/stats.1.log" "b/exp/asr_stats_raw_en_word/logdir/stats.1.log"
new file mode 100644
--- /dev/null
+++ "b/exp/asr_stats_raw_en_word/logdir/stats.1.log"
@@ -0,0 +1,1585 @@
+# python3 -m espnet2.bin.asr_train --collect_stats true --use_preprocessor true --bpemodel none --token_type word --token_list data/en_token_list/word/tokens.txt --non_linguistic_symbols none --cleaner none --g2p none --train_data_path_and_name_and_type dump/raw/train/wav.scp,speech,sound --train_data_path_and_name_and_type dump/raw/train/text,text,text --valid_data_path_and_name_and_type dump/raw/valid/wav.scp,speech,sound --valid_data_path_and_name_and_type dump/raw/valid/text,text,text --train_shape_file exp/asr_stats_raw_en_word/logdir/train.1.scp --valid_shape_file exp/asr_stats_raw_en_word/logdir/valid.1.scp --output_dir exp/asr_stats_raw_en_word/logdir/stats.1 --config conf/train_hubert.yaml --frontend_conf fs=16k
+# Started at Sun Feb 27 18:08:48 EST 2022
+#
+/ocean/projects/cis210027p/ganesank/karthik_new/espnet/tools/venv/bin/python3 /ocean/projects/cis210027p/ganesank/karthik_new/espnet/espnet2/bin/asr_train.py --collect_stats true --use_preprocessor true --bpemodel none --token_type word --token_list data/en_token_list/word/tokens.txt --non_linguistic_symbols none --cleaner none --g2p none --train_data_path_and_name_and_type dump/raw/train/wav.scp,speech,sound --train_data_path_and_name_and_type dump/raw/train/text,text,text --valid_data_path_and_name_and_type dump/raw/valid/wav.scp,speech,sound --valid_data_path_and_name_and_type dump/raw/valid/text,text,text --train_shape_file exp/asr_stats_raw_en_word/logdir/train.1.scp --valid_shape_file exp/asr_stats_raw_en_word/logdir/valid.1.scp --output_dir exp/asr_stats_raw_en_word/logdir/stats.1 --config conf/train_hubert.yaml --frontend_conf fs=16k
+[br014] 2022-02-27 18:11:40,799 (asr:382) INFO: Vocabulary size: 613
+[br014] 2022-02-27 18:12:33,779 (filelock:274) INFO: Lock 140554560383872 acquired on ./hub/s3prl_cache/4a54d64fa42b41e39db994c958d8107d5785a100f38c6eba680b6a3cc79babb3.lock
+[br014] 2022-02-27 18:12:33,780 (filelock:318) INFO: Lock 140554560383872 released on ./hub/s3prl_cache/4a54d64fa42b41e39db994c958d8107d5785a100f38c6eba680b6a3cc79babb3.lock
+[br014] 2022-02-27 18:18:31,758 (hubert_pretraining:119) INFO: current directory is /ocean/projects/cis210027p/ganesank/karthik_new/espnet/egs2/dstc2/asr1
+[br014] 2022-02-27 18:18:31,854 (hubert_pretraining:120) INFO: HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/wnhsu/data/librivox', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/checkpoint/wnhsu/experiments/hubert/kmeans_20210121/km_dataset_librivox.model_iter_2.all', 'label_rate': 50, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
+[br014] 2022-02-27 18:19:03,605 (hubert:229) INFO: HubertModel Config: {'_name': 'hubert', 'label_rate': 50, 'extractor_mode': layer_norm, 'encoder_layers': 24, 'encoder_embed_dim': 1024, 'encoder_ffn_embed_dim': 4096, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.0, 'encoder_layerdrop': 0.0, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 768, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 1.0, 'mask_length': 10, 'mask_prob': 0.8, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 10, 'mask_channel_prob': 0.0, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True}
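Aside (not part of the captured log): the 'conv_feature_layers' string in the HubertModel config above fixes the downsample rate and label rate that later log lines report. Each tuple is (dim, kernel_size, stride), and the strides compose multiplicatively into the waveform-to-frame downsample factor. A minimal Python sketch of that arithmetic, with the 16 kHz sample rate taken from the --frontend_conf fs=16k flag:

    # Hedged sketch: evaluate the conv_feature_layers spec from the config above.
    conv_feature_layers = [(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512, 2, 2)] * 2

    downsample = 1
    for _dim, _kernel_size, stride in conv_feature_layers:
        downsample *= stride  # strides multiply: 5 * 2**6

    fs = 16000                # Hz, from --frontend_conf fs=16k
    print(downsample)         # 320 -> the Featurizer's reported downsample rate
    print(fs // downsample)   # 50  -> matches 'label_rate': 50 (frames per second)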
+[Featurizer] - Take a list of 25 features and weighted sum them.
+[Featurizer] - The selected feature hidden_states's downsample rate is 320
+[br014] 2022-02-27 18:31:52,299 (conformer_encoder:131) WARNING: Using legacy_rel_pos and it will be deprecated in the future.
+[br014] 2022-02-27 18:31:56,005 (conformer_encoder:231) WARNING: Using legacy_rel_selfattn and it will be deprecated in the future.
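Aside (not part of the captured log): every (abs_task:1108) line that follows records ESPnet turning off the gradient for one upstream parameter, i.e. the entire S3PRL/HuBERT frontend is frozen in this run. A hedged PyTorch sketch of that pattern (the freeze_upstream helper and model variable are illustrative, not ESPnet's actual code):

    import torch

    def freeze_upstream(model: torch.nn.Module, prefix: str = "frontend.upstream") -> None:
        # Walk all parameters and freeze those under the frontend's upstream model,
        # mirroring the "Setting <name>.requires_grad = False" INFO lines below.
        for name, param in model.named_parameters():
            if name.startswith(prefix):
                param.requires_grad = False
                print(f"Setting {name}.requires_grad = False")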
+[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.feature_extractor.conv_layers.5.2.1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.feature_extractor.conv_layers.5.2.1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.feature_extractor.conv_layers.6.0.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.feature_extractor.conv_layers.6.2.1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.feature_extractor.conv_layers.6.2.1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.post_extract_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.post_extract_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.pos_conv.0.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.pos_conv.0.weight_g.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.pos_conv.0.weight_v.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,408 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.0.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.0.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.1.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,409 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.2.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.2.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.3.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.3.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,410 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,411 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.4.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.5.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.5.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,499 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.6.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.6.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.7.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,500 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.8.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.8.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,501 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.9.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.9.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.10.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: 
Setting frontend.upstream.model.encoder.layers.11.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,502 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.11.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.12.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,503 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.12.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.13.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: 
Setting frontend.upstream.model.encoder.layers.14.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,504 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.14.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,505 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,599 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,599 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,599 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,599 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,599 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,599 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,599 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,599 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.15.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.15.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.16.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,600 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: 
Setting frontend.upstream.model.encoder.layers.17.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.17.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting 
frontend.upstream.model.encoder.layers.18.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,601 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.18.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.19.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: 
Setting frontend.upstream.model.encoder.layers.20.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,602 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.20.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn.k_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn.k_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn.v_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn.v_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn.q_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn.q_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting 
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.final_layer_norm.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.21.final_layer_norm.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn.k_proj.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn.k_proj.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn.v_proj.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn.v_proj.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn.q_proj.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn.q_proj.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn.out_proj.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn.out_proj.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn_layer_norm.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,603 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.self_attn_layer_norm.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.fc1.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.fc1.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.fc2.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.fc2.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.final_layer_norm.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.22.final_layer_norm.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.self_attn.k_proj.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.self_attn.k_proj.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.self_attn.v_proj.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.self_attn.v_proj.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.self_attn.q_proj.weight.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.self_attn.q_proj.bias.requires_grad = False
+[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO:
Setting frontend.upstream.model.encoder.layers.23.self_attn.out_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.self_attn.out_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.self_attn_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.self_attn_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.fc1.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.fc1.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.fc2.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.fc2.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.final_layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layers.23.final_layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.encoder.layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,604 (abs_task:1108) INFO: Setting frontend.upstream.model.layer_norm.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,605 (abs_task:1108) INFO: Setting frontend.upstream.model.layer_norm.bias.requires_grad = False +[br014] 2022-02-27 18:32:20,605 (abs_task:1108) INFO: Setting frontend.upstream.model.final_proj.weight.requires_grad = False +[br014] 2022-02-27 18:32:20,605 (abs_task:1108) INFO: Setting frontend.upstream.model.final_proj.bias.requires_grad = False +[br014] 2022-02-27 18:32:22,200 (abs_task:1132) INFO: pytorch.version=1.8.1+cu102, cuda.available=False, cudnn.version=7605, cudnn.benchmark=False, cudnn.deterministic=True +[br014] 2022-02-27 18:32:22,306 (abs_task:1133) INFO: Model structure: +ESPnetASRModel( + (frontend): S3prlFrontend( + (upstream): UpstreamExpert( + (model): HubertModel( + (feature_extractor): ConvFeatureExtractionModel( + (conv_layers): ModuleList( + (0): Sequential( + (0): Conv1d(1, 512, kernel_size=(10,), stride=(5,), bias=False) + (1): Dropout(p=0.0, inplace=False) + (2): Sequential( + (0): TransposeLast() + (1): Fp32LayerNorm((512,), eps=1e-05, elementwise_affine=True) + (2): TransposeLast() + ) + (3): GELU() + ) + (1): Sequential( + (0): Conv1d(512, 512, kernel_size=(3,), stride=(2,), bias=False) + (1): Dropout(p=0.0, inplace=False) + (2): Sequential( + (0): TransposeLast() + (1): Fp32LayerNorm((512,), eps=1e-05, elementwise_affine=True) + (2): TransposeLast() + ) + (3): GELU() + ) + (2): Sequential( + (0): Conv1d(512, 512, kernel_size=(3,), stride=(2,), bias=False) + (1): Dropout(p=0.0, inplace=False) + (2): Sequential( + (0): TransposeLast() + (1): Fp32LayerNorm((512,), eps=1e-05, elementwise_affine=True) + (2): TransposeLast() + ) + (3): GELU() + ) + (3): Sequential( + (0): Conv1d(512, 512, kernel_size=(3,), stride=(2,), bias=False) + (1): Dropout(p=0.0, inplace=False) + 
(2): Sequential( + (0): TransposeLast() + (1): Fp32LayerNorm((512,), eps=1e-05, elementwise_affine=True) + (2): TransposeLast() + ) + (3): GELU() + ) + (4): Sequential( + (0): Conv1d(512, 512, kernel_size=(3,), stride=(2,), bias=False) + (1): Dropout(p=0.0, inplace=False) + (2): Sequential( + (0): TransposeLast() + (1): Fp32LayerNorm((512,), eps=1e-05, elementwise_affine=True) + (2): TransposeLast() + ) + (3): GELU() + ) + (5): Sequential( + (0): Conv1d(512, 512, kernel_size=(2,), stride=(2,), bias=False) + (1): Dropout(p=0.0, inplace=False) + (2): Sequential( + (0): TransposeLast() + (1): Fp32LayerNorm((512,), eps=1e-05, elementwise_affine=True) + (2): TransposeLast() + ) + (3): GELU() + ) + (6): Sequential( + (0): Conv1d(512, 512, kernel_size=(2,), stride=(2,), bias=False) + (1): Dropout(p=0.0, inplace=False) + (2): Sequential( + (0): TransposeLast() + (1): Fp32LayerNorm((512,), eps=1e-05, elementwise_affine=True) + (2): TransposeLast() + ) + (3): GELU() + ) + ) + ) + (post_extract_proj): Linear(in_features=512, out_features=1024, bias=True) + (dropout_input): Dropout(p=0.0, inplace=False) + (dropout_features): Dropout(p=0.0, inplace=False) + (encoder): TransformerEncoder( + (pos_conv): Sequential( + (0): Conv1d(1024, 1024, kernel_size=(128,), stride=(1,), padding=(64,), groups=16) + (1): SamePad() + (2): GELU() + ) + (layers): ModuleList( + (0): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (1): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (2): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, 
elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (3): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (4): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (5): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (6): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (7): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + 
(dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (8): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (9): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (10): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (11): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + 
(dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (12): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (13): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (14): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (15): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, 
out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (16): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (17): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (18): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (19): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (20): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): 
Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (21): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (22): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (23): TransformerSentenceEncoderLayer( + (self_attn): MultiheadAttention( + (dropout_module): FairseqDropout() + (k_proj): Linear(in_features=1024, out_features=1024, bias=True) + (v_proj): Linear(in_features=1024, out_features=1024, bias=True) + (q_proj): Linear(in_features=1024, out_features=1024, bias=True) + (out_proj): Linear(in_features=1024, out_features=1024, bias=True) + ) + (dropout1): Dropout(p=0.0, inplace=False) + (dropout2): Dropout(p=0.0, inplace=False) + (dropout3): Dropout(p=0.0, inplace=False) + (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + (fc1): Linear(in_features=1024, out_features=4096, bias=True) + (fc2): Linear(in_features=4096, out_features=1024, bias=True) + (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + ) + (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) + ) + (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) + (final_proj): Linear(in_features=1024, out_features=768, bias=True) + ) + ) + (featurizer): Featurizer() + ) + (specaug): SpecAug( + (time_warp): TimeWarp(window=5, mode=bicubic) + (freq_mask): MaskAlongAxis(mask_width_range=[0, 30], num_mask=2, axis=freq) + (time_mask): MaskAlongAxis(mask_width_range=[0, 40], num_mask=2, axis=time) + ) + (normalize): 
UtteranceMVN(norm_means=True, norm_vars=False) + (preencoder): LinearProjection( + (linear_out): Linear(in_features=1024, out_features=80, bias=True) + ) + (encoder): ConformerEncoder( + (embed): Conv2dSubsampling( + (conv): Sequential( + (0): Conv2d(1, 512, kernel_size=(3, 3), stride=(2, 2)) + (1): ReLU() + (2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2)) + (3): ReLU() + ) + (out): Sequential( + (0): Linear(in_features=9728, out_features=512, bias=True) + (1): LegacyRelPositionalEncoding( + (dropout): Dropout(p=0.1, inplace=False) + ) + ) + ) + (encoders): MultiSequential( + (0): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (1): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (2): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (3): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), 
stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (4): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (5): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, 
elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (6): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (7): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, 
elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (8): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (9): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, 
elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (10): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (11): EncoderLayer( + (self_attn): LegacyRelPositionMultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (linear_pos): Linear(in_features=512, out_features=512, bias=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (feed_forward_macaron): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): Swish() + ) + (conv_module): ConvolutionModule( + (pointwise_conv1): Conv1d(512, 1024, kernel_size=(1,), stride=(1,)) + (depthwise_conv): Conv1d(512, 512, kernel_size=(31,), stride=(1,), padding=(15,), groups=512) + (norm): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (pointwise_conv2): Conv1d(512, 512, kernel_size=(1,), stride=(1,)) + (activation): Swish() + ) + (norm_ff): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_mha): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_ff_macaron): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm_conv): LayerNorm((512,), eps=1e-12, 
elementwise_affine=True) + (norm_final): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + ) + (after_norm): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + ) + (decoder): TransformerDecoder( + (embed): Sequential( + (0): Embedding(613, 512) + (1): PositionalEncoding( + (dropout): Dropout(p=0.1, inplace=False) + ) + ) + (after_norm): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (output_layer): Linear(in_features=512, out_features=613, bias=True) + (decoders): MultiSequential( + (0): DecoderLayer( + (self_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (src_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): ReLU() + ) + (norm1): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm2): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm3): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (1): DecoderLayer( + (self_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (src_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): ReLU() + ) + (norm1): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm2): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm3): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (2): DecoderLayer( + (self_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (src_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, 
out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): ReLU() + ) + (norm1): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm2): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm3): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (3): DecoderLayer( + (self_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (src_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): ReLU() + ) + (norm1): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm2): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm3): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (4): DecoderLayer( + (self_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (src_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (feed_forward): PositionwiseFeedForward( + (w_1): Linear(in_features=512, out_features=2048, bias=True) + (w_2): Linear(in_features=2048, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + (activation): ReLU() + ) + (norm1): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm2): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (norm3): LayerNorm((512,), eps=1e-12, elementwise_affine=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (5): DecoderLayer( + (self_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): Linear(in_features=512, out_features=512, bias=True) + (linear_v): Linear(in_features=512, out_features=512, bias=True) + (linear_out): Linear(in_features=512, out_features=512, bias=True) + (dropout): Dropout(p=0.1, inplace=False) + ) + (src_attn): MultiHeadedAttention( + (linear_q): Linear(in_features=512, out_features=512, bias=True) + (linear_k): 
Linear(in_features=512, out_features=512, bias=True)
+          (linear_v): Linear(in_features=512, out_features=512, bias=True)
+          (linear_out): Linear(in_features=512, out_features=512, bias=True)
+          (dropout): Dropout(p=0.1, inplace=False)
+        )
+        (feed_forward): PositionwiseFeedForward(
+          (w_1): Linear(in_features=512, out_features=2048, bias=True)
+          (w_2): Linear(in_features=2048, out_features=512, bias=True)
+          (dropout): Dropout(p=0.1, inplace=False)
+          (activation): ReLU()
+        )
+        (norm1): LayerNorm((512,), eps=1e-12, elementwise_affine=True)
+        (norm2): LayerNorm((512,), eps=1e-12, elementwise_affine=True)
+        (norm3): LayerNorm((512,), eps=1e-12, elementwise_affine=True)
+        (dropout): Dropout(p=0.1, inplace=False)
+      )
+    )
+  )
+  (ctc): CTC(
+    (ctc_lo): Linear(in_features=512, out_features=613, bias=True)
+    (ctc_loss): CTCLoss()
+  )
+  (criterion_att): LabelSmoothingLoss(
+    (criterion): KLDivLoss()
+  )
+)
+
+Model summary:
+    Class Name: ESPnetASRModel
+    Total Number of model parameters: 426.09 M
+    Number of trainable parameters: 109.48 M (25.7%)
+    Size: 437.93 MB
+    Type: torch.float32
+[br014] 2022-02-27 18:32:22,306 (abs_task:1136) INFO: Optimizer:
+Adam (
+Parameter Group 0
+    amsgrad: False
+    betas: (0.9, 0.999)
+    eps: 1e-08
+    initial_lr: 0.0002
+    lr: 8e-09
+    weight_decay: 0
+)
+[br014] 2022-02-27 18:32:22,306 (abs_task:1137) INFO: Scheduler: WarmupLR(warmup_steps=25000)
+[br014] 2022-02-27 18:32:22,401 (abs_task:1146) INFO: Saving the configuration in exp/asr_stats_raw_en_word/logdir/stats.1/config.yaml
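The optimizer's `lr: 8e-09` against `initial_lr: 0.0002` is not a typo: it is the first step of the `WarmupLR(warmup_steps=25000)` schedule. As we understand ESPnet's WarmupLR (check espnet2/schedulers for the authoritative version), it follows the Noam-style inverse-square-root rule lr(step) = base_lr * warmup_steps^0.5 * min(step^-0.5, step * warmup_steps^-1.5), which ramps linearly up to base_lr at step 25000 and decays as 1/sqrt(step) afterwards. At step 1 this gives 2e-4 / 25000 = 8e-9, exactly the value logged above. A hedged sketch:

```python
def warmup_lr(step: int, base_lr: float = 2.0e-4, warmup_steps: int = 25000) -> float:
    """Noam-style warmup: linear ramp for `warmup_steps` steps, then 1/sqrt(step) decay."""
    return base_lr * warmup_steps**0.5 * min(step**-0.5, step * warmup_steps**-1.5)

print(warmup_lr(1))       # 8e-09 -> matches the "lr: 8e-09" logged by the optimizer
print(warmup_lr(25000))   # 2e-04 -> the peak equals the configured initial_lr
print(warmup_lr(100000))  # 1e-04 -> inverse-sqrt decay past the warmup point
```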
+[br014] 2022-02-27 18:32:22,306 (abs_task:1136) INFO: Optimizer:
+Adam (
+Parameter Group 0
+    amsgrad: False
+    betas: (0.9, 0.999)
+    eps: 1e-08
+    initial_lr: 0.0002
+    lr: 8e-09
+    weight_decay: 0
+)
+[br014] 2022-02-27 18:32:22,306 (abs_task:1137) INFO: Scheduler: WarmupLR(warmup_steps=25000)
+[br014] 2022-02-27 18:32:22,401 (abs_task:1146) INFO: Saving the configuration in exp/asr_stats_raw_en_word/logdir/stats.1/config.yaml
+[br014] 2022-02-27 18:32:23,200 (abs_task:1157) INFO: Namespace(config='conf/train_hubert.yaml', print_config=False, log_level='INFO', dry_run=False, iterator_type='sequence', output_dir='exp/asr_stats_raw_en_word/logdir/stats.1', ngpu=0, seed=0, num_workers=1, num_att_plot=3, dist_backend='nccl', dist_init_method='env://', dist_world_size=None, dist_rank=None, local_rank=None, dist_master_addr=None, dist_master_port=None, dist_launcher=None, multiprocessing_distributed=False, unused_parameters=False, sharded_ddp=False, cudnn_enabled=True, cudnn_benchmark=False, cudnn_deterministic=True, collect_stats=True, write_collected_feats=False, max_epoch=50, patience=None, val_scheduler_criterion=('valid', 'loss'), early_stopping_criterion=('valid', 'loss', 'min'), best_model_criterion=[['valid', 'acc', 'max']], keep_nbest_models=10, grad_clip=5.0, grad_clip_type=2.0, grad_noise=False, accum_grad=1, no_forward_run=False, resume=False, train_dtype='float32', use_amp=False, log_interval=None, use_tensorboard=True, use_wandb=False, wandb_project=None, wandb_id=None, wandb_entity=None, wandb_name=None, wandb_model_log_interval=-1, detect_anomaly=False, pretrain_path=None, init_param=[], ignore_init_mismatch=False, freeze_param=['frontend.upstream'], num_iters_per_epoch=None, batch_size=20, valid_batch_size=None, batch_bins=1000000, valid_batch_bins=None, train_shape_file=['exp/asr_stats_raw_en_word/logdir/train.1.scp'], valid_shape_file=['exp/asr_stats_raw_en_word/logdir/valid.1.scp'], batch_type='folded', valid_batch_type=None, fold_length=[], sort_in_batch='descending', sort_batch='descending', multiple_iterator=False, chunk_length=500, chunk_shift_ratio=0.5, num_cache_chunks=1024, train_data_path_and_name_and_type=[('dump/raw/train/wav.scp', 'speech', 'sound'), ('dump/raw/train/text', 'text', 'text')], valid_data_path_and_name_and_type=[('dump/raw/valid/wav.scp', 'speech', 'sound'), ('dump/raw/valid/text', 'text', 'text')], allow_variable_data_keys=False, max_cache_size=0.0, max_cache_fd=32, valid_max_cache_size=None, optim='adam', optim_conf={'lr': 0.0002}, scheduler='warmuplr', scheduler_conf={'warmup_steps': 25000}, token_list=['', '', '', '', 'bye', 'the', 'food', 'you', 'thank', 'thankyou', 'good', 'request-phone', 'number', 'phone', 'request-addr', 'address', 'restaurant', 'of', 'i', 'what', 'is', 'a', 'in', 'town', 'reqalts', 'part', 'inform-this-dontcare', 'for', 'and', 'looking', 'im', 'whats', 'about', 'inform-pricerange-moderate', 'dont', 'that', 'care', 'affirm', 'cheap', 'inform-pricerange-cheap', 'south', 'inform-area-south', 'how', 'serves', 'have', 'moderately', 'yes', 'priced', 'expensive', 'north', 'any', 'inform-pricerange-expensive', 'can', 'request-postcode', 'anything', 'else', 'inform-area-north', 'code', 'post', 'it', 'price', 'west', 'inform-area-west', 'east', 'type', 'inform-area-east', 'range', 'there', 'request-food', 'okay', 'oriental', 'goodbye', 'european', 'request-pricerange', 'area', 'want', 'an', 'inform-food-indian', 'indian', 'matter', 'doesnt', 'uh', 'thai', 'serve', 'request-area', 'inform-food-thai', 'inform-food-asian', 'asian', 'chinese', 'like', 'inform-area-centre', 'find', 'inform-food-chinese', 'inform-food-italian', 'italian', 'negate', 'center', 'no', 'moderate', 'please', 'get', 'to', 'their', 'inform-food-european', 'serving', 'may', 'inform-area-dontcare', 'do', 'korean', 'spanish', 'inform-food-spanish', 'vietnamese', 'inform-food-vietnamese', 'inform-food-korean', 'id', 'could', 'american', 'british', 'inform-food-british', 'kind', 'need', 'inform-food-turkish', 'turkish', 'um', 'inform-food-portuguese', 'gastropub', 'portuguese', 'does', 'inform-food-gastropub', 'inform-food-french', 'french', 'would', 'inform-food-mediterranean', 'mediterranean', 'they', 'modern', 'hello', 'inform-food-modern', 'noise', 'inform-pricerange-dontcare', 'its', 'inform-food-international', 'international', 'me', 'should', 'inform-food-north', 'repeat', 'right', 'give', 'inform-food-seafood', 'inform-food-japanese', 'japanese', 'jamaican', 'inform-food-jamaican', 'inform-food-creative', 'creative', 'are', 'inform-food-mexican', 'mexican', 'telephone', 'another', 'one', 'hungarian', 'ah', 'something', 'inform-food-dontcare', 'inform-food-cantonese', 'cantonese', 'inform-food-cuban', 'cuban', 'inform-food-hungarian', 'hi', 'breath', 'sea', 'yea', 'am', 'inform-food-traditional', 'traditional', 'caribbean', 'restaurants', 'ack', 'inform-food-world', 'world', 'with', 'inform-food-caribbean', 'barbecue', 'inform-food-corsica', 'corsica', 'inform-food-lebanese', 'lebanese', 'be', 'inform-food-basque', 'postcode', 'inform-food-romanian', 'romanian', 'inform-food-greek', 'greek', 'inform-food-barbeque', 'inform-food-african', 'african', 'side', 'other', 'pan', 'inform-food-english', 'english', 'inform-food-danish', 'danish', 'venue', 'inform-food-malaysian', 'australian', 'inform-food-unusual', 'unusual', 'inform-food-moroccan', 'inform-food-kosher', 'kosher', 'thats', 'inform-food-scandinavian', 'inform-food-afghan', 'afghan', 'inform-food-polynesian', 'polynesian', 'bout', 'inform-food-german', 'german', 'not', 'inform-food-vegetarian', 'inform-food-persian', 'persian', 'scandinavian', 'basque', 'inform-food-belgian', 'malaysian', 'inform-food-australian', 'moroccan', 'christmas', 'inform-food-catalan', 'inform-food-canapes', 'vegetarian', 'on', 'swedish', 'inform-food-irish', 'irish', 'canapes', 'inform-food-christmas', 'catalan', 'inform-food-venetian', 'inform-food-swedish', 'where', 'inform-food-tuscan', 'tuscan', 'inform-food-eritrean',
'venetian', 'inform-food-steakhouse', 'fusion', 'unintelligible', 'inform-food-bistro', 'bistro', 'yeah', 'alright', 'inform-food-swiss', 'swiss', 'inform-food-singaporean', 'seafood', 'know', 'confirm-pricerange-expensive', 'confirm-pricerange-moderate', 'next', 'oh', 'inform-food-brazilian', 'brazilian', 'inform-food-scottish', 'scottish', 'inform-food-fusion', 'inform-food-russian', 'russian', 'singaporean', 'kay', 'fine', 'inform-food-welsh', 'welsh', 'over', 'belgium', 'belgian', 'great', 'addre', 'inform-food-crossover', 'cool', 'steakhouse', 'confirm-food-chinese', 'inform-food-austrian', 'austrian', 'inform-food-polish', 'polish', 'again', 'centre', 'then', 'ok', 'halal', 'steak', 'back', 'thanks', 'inform-food-indonesian', 'indonesian', 'correct', 'well', 'confirm-area-centre', 'confirm-area-north', 'inform-food-halal', 'see', 'welcome', 'house', 'postal', 'pri', 'more', 'anywhere', 'central', 'crossover', 'much', 'very', 'located', 'my', 'confirm-pricerange-cheap', 'restart', 'start', 'go', 'just', 'iam', 'confirm-food-thai', 'confirm-food-korean', 'city', 'as', 'wok', 'option', 'was', 'two', 'your', 'confirm-food-gastropub', 'time', 'chiquito', 'inform-name-prezzo', 'prezzo', 'fuck', 'prices', 'reqmore', 'bask', 'different', 'cambridge', 'turkiesh', 'show', 'chineese', 'confirm-area-east', 'rest', 'request-name', 'name', 'try', 'sorry', 'foo', 'ye', 'ser', 'sells', 'change', 'confirm-food-hungarian', 'eritrean', 'but', 'eartrain', 'options', 'location', 'served', 'cross', 'k', 'inform-name-chiquito', 'bar', 'tv_noise', 'confirm-food-canapes', 'day', 'parts', 'malyasian', 'airitran', 'so', 'new', 'at', 'confirm-food-indian', 'confirm-food-portuguese', 'place', 'tell', 'though', 'choice', 'awesome', 'stop', 'inform-food-australasian', 'portugese', 'missing', 'sock', 'deny-name-golden', 'golden', 'park', 'tur', 'vinci', 'pizzeria', 'endonesian', 'needs', 'deny-food-korean', 'confirm-area-west', 't', 'trying', 'dear', 'thatll', 'excellent', 'baskaye', 'confirm-food-basque', 'p', 'if', 'india', 'some', 'ran', 'moroccon', 'confirm-food-european', 'hut', 'all', 'airatarin', 'canope', 'tailand', 'vanessa', 'earatrain', 'shit', 'ts', 'confirm-food-steakhouse', 'cantonates', 'vegitarian', 'knocking', 'signaporian', 'mail', 'foods', 'got', 'us', 'lets', 'f', 'medium', 'un', 'downtown', 'portugeuse', 'venues', 'talking', 'nymber', 'every', 'this', 'moron', 'says', 'sucks', 'itailian', 'chinses', 'elses', 'request-signature', 'special', 'restaurnt', 'confirm-food-fusion', 'spensive', 'scandinavia', 'gastro', 'pub', 'anyone', 'deny-food-chinese', 'res', 'derately', 'down', 'fancy', 'wha', 'alternative', 'confirm-food-mediterranean', 'confirm-food-caribbean', 'first', 'least', 'bart', 'selection', 'finally', 'somewhere', 'ko', 'sounds', 'said', 'eat', 'huh', 'searching', 's', 'wrong', 'cute', 'ffood', 'earetree', 'earatree', 'confirm-food-modern', 'confirm-food-christmas', 'long', 'class', 'restauran', 'turk', 'deny-name-the', 'beside', 'yourself', 'hate', 'signaporean', 'restuarant', 'did', 'inform-name-da', 'da', 'only', 'int', 'inform-name-bloomsbury', 'bloomsbury', 'inaudible', 'scandanavian', 'done', 'confirm-food-indonesian', 'cancun', 'gasper', 'o', 'meant', 'plea', 'halo', 'inner', 'confirm-food-swedish', 'confirm-food-asian', 'wanna', 'catalanian', 'darling', 'canape', 'baskey', 'indians', 'bat', 'europ', 'now', 'canopy', 'restaraunt', 'medterranean', 'cant', 'deosnt', 'ostro', 'addrss', 'damn', 'deny-name-hk', 'hk', 'signapore', 'probably', 'ly', 'moderat', 'modereate', 'let', 'zip', 
'spani', 'adddress', 'ori', 'euorpean', 'confirm-food-seafood', 'mistakes', 'ooh', 'confirm-food-spanish', 'worth', 'mediteranian', 'music', 'others', 'b', 'types', 'thing', 'fish', 'besides', 'confirm-food-halal', 'inform-name-pizza', 'pizza', 'ever', 'surprise', 'ones', 'train', 'arotrian', 'modertley', 'calling', 'minuet', 'york', 'sh', 'cost', 'confirm-area-south', 'bristish', 'confirm-food-british', 'loo', 'think', 'medetanian', 'wheres', 'his', 'confirm-food-turkish', 'inform-name-restaurant', 'euro', 'wondering', 'theres', 'afternoon', 'sure', 'might', 'umh', 'deny-food-vietnamese', 'art', 'rerestaurant', 'vietna', 'ne', 'take', 'modreately', 'air', 'tran', 'crosstalk', 'mind', 'ya', 'god', 'really', 'believe', 'confirm-food-italian', 'confirm-food-jamaican', 'preference', ''], init=None, input_size=None, ctc_conf={'dropout_rate': 0.0, 'ctc_type': 'builtin', 'reduce': True, 'ignore_nan_grad': True}, model_conf={'ctc_weight': 0.3, 'lsm_weight': 0.1, 'length_normalized_loss': False, 'extract_feats_in_collect_stats': False}, use_preprocessor=True, token_type='word', bpemodel=None, non_linguistic_symbols=None, cleaner=None, g2p=None, speech_volume_normalize=None, rir_scp=None, rir_apply_prob=1.0, noise_scp=None, noise_apply_prob=1.0, noise_db_range='13_15', frontend='s3prl', frontend_conf={'frontend_conf': {'upstream': 'hubert_large_ll60k'}, 'download_dir': './hub', 'multilayer_feature': True, 'fs': '16k'}, specaug='specaug', specaug_conf={'apply_time_warp': True, 'time_warp_window': 5, 'time_warp_mode': 'bicubic', 'apply_freq_mask': True, 'freq_mask_width_range': [0, 30], 'num_freq_mask': 2, 'apply_time_mask': True, 'time_mask_width_range': [0, 40], 'num_time_mask': 2}, normalize='utterance_mvn', normalize_conf={}, preencoder='linear', preencoder_conf={'input_size': 1024, 'output_size': 80}, encoder='conformer', encoder_conf={'output_size': 512, 'attention_heads': 8, 'linear_units': 2048, 'num_blocks': 12, 'dropout_rate': 0.1, 'positional_dropout_rate': 0.1, 'attention_dropout_rate': 0.1, 'input_layer': 'conv2d', 'normalize_before': True, 'macaron_style': True, 'pos_enc_layer_type': 'rel_pos', 'selfattention_layer_type': 'rel_selfattn', 'activation_type': 'swish', 'use_cnn_module': True, 'cnn_module_kernel': 31}, postencoder=None, postencoder_conf={}, decoder='transformer', decoder_conf={'attention_heads': 8, 'linear_units': 2048, 'num_blocks': 6, 'dropout_rate': 0.1, 'positional_dropout_rate': 0.1, 'self_attention_dropout_rate': 0.1, 'src_attention_dropout_rate': 0.1}, required=['output_dir', 'token_list'], version='0.10.3a3', distributed=False)
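As a cross-check on the Optimizer dump further above, the starting lr of 8e-09 follows from optim_conf={'lr': 0.0002} and scheduler_conf={'warmup_steps': 25000}, assuming espnet2's WarmupLR implements the usual Noam-style schedule lr(step) = base_lr * warmup**0.5 * min(step**-0.5, step * warmup**-1.5): at step 1 this reduces to base_lr / warmup = 0.0002 / 25000 = 8e-09, and it ramps linearly to base_lr at the end of warmup before decaying as inverse square root. A quick sketch:

    def warmup_lr(step: int, base_lr: float = 2.0e-4, warmup: int = 25000) -> float:
        # Noam-style warmup: linear ramp to base_lr, then inverse-sqrt decay.
        return base_lr * warmup**0.5 * min(step**-0.5, step * warmup**-1.5)

    assert abs(warmup_lr(1) - 8e-09) < 1e-15       # matches "lr: 8e-09" in the Optimizer dump
    assert abs(warmup_lr(25000) - 2.0e-4) < 1e-12  # peak lr at the end of warmup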
+[s3prl.upstream.experts] Warning: can not import s3prl.upstream.byol_a.expert: No module named 'easydict'. Pass.
+[s3prl.hub] Warning: can not import s3prl.upstream.byol_a.hubconf: No module named 'easydict'. Please see upstream/byol_a/README.md
+[s3prl.downstream.experts] Warning: can not import s3prl.downstream.voxceleb2_ge2e.expert: No module named 'sox'. Pass.
+[s3prl.downstream.experts] Warning: can not import s3prl.downstream.sws2013.expert: No module named 'lxml'. Pass.
+[s3prl.downstream.experts] Warning: can not import s3prl.downstream.separation_stft.expert: No module named 'asteroid'. Pass.
+[s3prl.downstream.experts] Warning: can not import s3prl.downstream.enhancement_stft.expert: No module named 'asteroid'. Pass.
+[s3prl.downstream.experts] Warning: can not import s3prl.downstream.quesst14_embedding.expert: No module named 'lxml'. Pass.
+[s3prl.downstream.experts] Warning: can not import s3prl.downstream.speech_commands.expert: No module named 'catalyst'. Pass.
+[s3prl.downstream.experts] Warning: can not import s3prl.downstream.quesst14_dtw.expert: No module named 'dtw'. Pass.
+[s3prl.downstream.experts] Warning: can not import s3prl.downstream.sv_voxceleb1.expert: No module named 'sox'. Pass.
+Using cache found in ./hub/s3prl_cache/4a54d64fa42b41e39db994c958d8107d5785a100f38c6eba680b6a3cc79babb3
+for https://dl.fbaipublicfiles.com/hubert/hubert_large_ll60k.pt
+[br014] 2022-02-27 18:32:45,199 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:32:45,402 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:32:47,305 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:32:51,708 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:32:54,504 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:32:56,604 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:33:02,001 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:33:06,700 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:33:08,501 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:33:23,300 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:33:24,311 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+[br014] 2022-02-27 18:33:26,003 (espnet_model:196) WARNING: Generating dummy stats for feats and feats_lengths, because encoder_conf.extract_feats_in_collect_stats is False
+# Accounting: time=1510 threads=1
+# Ended (code 0) at Sun Feb 27 18:33:59 EST 2022, elapsed time 1510 seconds
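The repeated "Generating dummy stats" warnings at the end are expected for this recipe: with model_conf extract_feats_in_collect_stats=False, the stats pass only records shape information, since the s3prl/HuBERT features are extracted inside the model at training time and normalize='utterance_mvn' needs no precomputed global statistics. A hedged sketch of the branch that emits the warning (simplified paraphrase; collect_feats and _extract_feats exist in espnet2's ESPnetASRModel, but the body here is ours):

    import logging
    import torch

    def collect_feats(self, speech: torch.Tensor, speech_lengths: torch.Tensor) -> dict:
        if self.extract_feats_in_collect_stats:
            # Run the frontend so real feature statistics can be accumulated.
            feats, feats_lengths = self._extract_feats(speech, speech_lengths)
        else:
            # Dummy stats: pass the raw waveform through so only shapes are recorded.
            logging.warning("Generating dummy stats for feats and feats_lengths, "
                            "because encoder_conf.extract_feats_in_collect_stats is False")
            feats, feats_lengths = speech, speech_lengths
        return {"feats": feats, "feats_lengths": feats_lengths}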