2023-08-17 15:22:10,425 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'half_type': 'fp16', 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3, 'all_in_mem': False, 'vol_aug': True}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050, 'unit_interpolate_mode': 'nearest'}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'n_layers_trans_flow': 3, 'n_flow_layer': 4, 'use_spectral_norm': False, 'gin_channels': 768, 'ssl_dim': 768, 'n_speakers': 8, 'vocoder_name': 'nsf-hifigan', 'speech_encoder': 'vec768l12', 'speaker_embedding': False, 'vol_embedding': True, 'use_depthwise_conv': False, 'flow_share_parameter': False, 'use_automatic_f0_prediction': True, 'use_transformer_flow': False}, 'spk': {'aro_sweet': 0, 'aro_normal': 1, 'aro_light': 2, 'aro_whisper': 3, 'aro_tender': 4, 'aro_power': 5, 'aro_dark': 6, 'aro_adult': 7}, 'model_dir': './logs/44k'}
2023-08-17 15:22:23,720 44k INFO emb_g.weight is not in the checkpoint
2023-08-17 15:22:23,720 44k INFO emb_vol.weight is not in the checkpoint
2023-08-17 15:22:23,720 44k INFO emb_vol.bias is not in the checkpoint
2023-08-17 15:22:23,798 44k INFO Loaded checkpoint './logs/44k/G_0.pth' (iteration 0)
2023-08-17 15:22:25,036 44k INFO Loaded checkpoint './logs/44k/D_0.pth' (iteration 0)
2023-08-17 15:26:03,027 44k INFO Train Epoch: 1 [56%]
2023-08-17 15:26:03,028 44k INFO Losses: [2.5576305389404297, 2.2087337970733643, 17.643268585205078, 22.64542579650879, 1.5581012964248657], step: 200, lr: 0.0001, reference_loss: 46.613162994384766
2023-08-17 15:28:24,256 44k INFO ====> Epoch: 1, cost 373.84 s
2023-08-17 15:29:17,165 44k INFO Train Epoch: 2 [12%]
2023-08-17 15:29:17,166 44k INFO Losses: [2.1168274879455566, 2.9472110271453857, 12.475791931152344, 18.62888526916504, 1.3812187910079956], step: 400, lr: 9.99875e-05, reference_loss: 37.54993438720703
2023-08-17 15:31:55,170 44k INFO Train Epoch: 2 [68%]
2023-08-17 15:31:55,171 44k INFO Losses: [2.2963595390319824, 2.8945212364196777, 12.427124977111816, 20.10533905029297, 1.331030011177063], step: 600, lr: 9.99875e-05, reference_loss: 39.05437469482422
2023-08-17 15:33:26,375 44k INFO ====> Epoch: 2, cost 302.12 s
2023-08-17 15:34:50,731 44k INFO Train Epoch: 3 [24%]
2023-08-17 15:34:50,736 44k INFO Losses: [2.1560511589050293, 2.912745237350464, 12.682981491088867, 20.274463653564453, 0.9434249997138977], step: 800, lr: 9.99750015625e-05, reference_loss: 38.96966552734375
2023-08-17 15:35:39,004 44k INFO Saving model and optimizer state at iteration 3 to ./logs/44k/G_800.pth
2023-08-17 15:35:45,524 44k INFO Saving model and optimizer state at iteration 3 to ./logs/44k/D_800.pth
2023-08-17 15:38:32,528 44k INFO Train Epoch: 3 [81%]
2023-08-17 15:38:32,530 44k INFO Losses: [2.099100112915039, 3.036345958709717, 16.33808135986328, 19.81005859375, 1.01120924949646], step: 1000, lr: 9.99750015625e-05, reference_loss: 42.29479217529297
2023-08-17 15:39:29,118 44k INFO ====> Epoch: 3, cost 362.74 s
2023-08-17 15:41:25,913 44k INFO Train Epoch: 4 [37%]
2023-08-17 15:41:25,918 44k INFO Losses: [2.078648567199707, 3.0924131870269775, 13.713207244873047, 18.991626739501953, 1.1123517751693726], step: 1200, lr: 9.996250468730469e-05, reference_loss: 38.98824691772461
2023-08-17 15:44:03,305 44k INFO Train Epoch: 4 [93%]
2023-08-17 15:44:03,310 44k INFO Losses: [1.9055949449539185, 2.90034556388855, 18.006919860839844, 18.755510330200195, 1.0224320888519287], step: 1400, lr: 9.996250468730469e-05, reference_loss: 42.59080123901367
2023-08-17 15:44:25,629 44k INFO ====> Epoch: 4, cost 296.51 s
2023-08-17 15:46:56,674 44k INFO Train Epoch: 5 [49%]
2023-08-17 15:46:56,678 44k INFO Losses: [2.247441053390503, 2.5629279613494873, 16.587787628173828, 18.015050888061523, 1.033828616142273], step: 1600, lr: 9.995000937421877e-05, reference_loss: 40.44703674316406
2023-08-17 15:47:14,311 44k INFO Saving model and optimizer state at iteration 5 to ./logs/44k/G_1600.pth
2023-08-17 15:47:20,260 44k INFO Saving model and optimizer state at iteration 5 to ./logs/44k/D_1600.pth
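The `lr` values logged above are consistent with a simple exponential schedule: the configured `learning_rate` (0.0001) multiplied by `lr_decay` (0.999875) once per epoch, i.e. lr(epoch) = 0.0001 × 0.999875^(epoch − 1). A minimal sketch verifying this against the logged values (`lr_at_epoch` is a hypothetical helper, not part of the training code):

```python
import math

BASE_LR = 1e-4       # 'learning_rate' from the config above
LR_DECAY = 0.999875  # 'lr_decay' from the config above

def lr_at_epoch(epoch: int) -> float:
    """Per-epoch exponential decay: base_lr * lr_decay**(epoch - 1)."""
    return BASE_LR * LR_DECAY ** (epoch - 1)

# Learning rates as they appear in the log for epochs 1-5.
logged = {
    1: 0.0001,
    2: 9.99875e-05,
    3: 9.99750015625e-05,
    4: 9.996250468730469e-05,
    5: 9.995000937421877e-05,
}
for epoch, lr in logged.items():
    assert math.isclose(lr_at_epoch(epoch), lr, rel_tol=1e-12)
```

At this decay rate the learning rate shrinks very slowly (about 0.0125% per epoch), which matches the near-identical `lr` values from one epoch to the next in the log.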
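The `reference_loss` reported on each `Losses:` line appears to be simply the sum of the five component losses in the bracketed list (the log itself does not name the components). A small sketch that parses one such line and checks this, using a hypothetical `parse_losses` helper:

```python
import re

def parse_losses(line: str):
    """Extract the component-loss list and reference_loss from a 'Losses:' log line."""
    components = [float(x) for x in re.search(r"\[(.*?)\]", line).group(1).split(", ")]
    reference = float(re.search(r"reference_loss: ([\d.]+)", line).group(1))
    return components, reference

# First 'Losses:' entry from the log above (step 200).
line = ("Losses: [2.5576305389404297, 2.2087337970733643, 17.643268585205078, "
        "22.64542579650879, 1.5581012964248657], step: 200, lr: 0.0001, "
        "reference_loss: 46.613162994384766")

components, reference = parse_losses(line)
# The sum matches reference_loss up to float32 accumulation error.
assert abs(sum(components) - reference) < 1e-3
```

This makes `reference_loss` a convenient single number for eyeballing overall progress, but the individual components move independently, so they are worth tracking separately.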