wandb/debug.log
2024-12-04 06:07:02,640 INFO MainThread:47802 [wandb_setup.py:_flush():79] Current SDK version is 0.18.7
2024-12-04 06:07:02,640 INFO MainThread:47802 [wandb_setup.py:_flush():79] Configure stats pid to 47802
2024-12-04 06:07:02,641 INFO MainThread:47802 [wandb_setup.py:_flush():79] Loading settings from /root/.config/wandb/settings
2024-12-04 06:07:02,641 INFO MainThread:47802 [wandb_setup.py:_flush():79] Loading settings from /workspace/GPT-SoVITS/wandb/settings
2024-12-04 06:07:02,642 INFO MainThread:47802 [wandb_setup.py:_flush():79] Loading settings from environment variables: {}
2024-12-04 06:07:02,642 INFO MainThread:47802 [wandb_setup.py:_flush():79] Applying setup settings: {'mode': None, '_disable_service': None}
2024-12-04 06:07:02,643 INFO MainThread:47802 [wandb_setup.py:_flush():79] Inferring run settings from compute environment: {'program_relpath': 'GPT_SoVITS/s1_train.py', 'program_abspath': '/workspace/GPT-SoVITS/GPT_SoVITS/s1_train.py', 'program': '/workspace/GPT-SoVITS/GPT_SoVITS/s1_train.py'}
2024-12-04 06:07:02,643 INFO MainThread:47802 [wandb_setup.py:_flush():79] Applying login settings: {}
2024-12-04 06:07:02,644 INFO MainThread:47802 [wandb_init.py:_log_setup():533] Logging user logs to /workspace/GPT-SoVITS/wandb/run-20241204_060702-yfryieml/logs/debug.log
2024-12-04 06:07:02,645 INFO MainThread:47802 [wandb_init.py:_log_setup():534] Logging internal logs to /workspace/GPT-SoVITS/wandb/run-20241204_060702-yfryieml/logs/debug-internal.log
2024-12-04 06:07:02,645 INFO MainThread:47802 [wandb_init.py:init():619] calling init triggers
2024-12-04 06:07:02,646 INFO MainThread:47802 [wandb_init.py:init():626] wandb.init called with sweep_config: {}
config: {'output_dir': 'logs/s1', 'train': {'seed': 1234, 'epochs': 15, 'batch_size': 8, 'save_every_n_epoch': 5, 'precision': 32, 'if_save_latest': True, 'if_save_every_weights': True, 'exp_name': 'gpt_training', 'half_weights_save_dir': 'weights/s1', 'wandb': {'project': 'gpt-sovits-hindi', 'name': 'stage1_training', 'entity': None, 'log_interval': 100}}, 'optimizer': {'lr_init': 0.0001, 'lr': 0.0004, 'lr_end': 1e-05, 'warmup_steps': 4000, 'decay_steps': 50000}, 'data': {'training_files': 'data8', 'max_sec': 60, 'max_frames': 60, 'filter_length': 2048, 'hop_length': 640, 'win_length': 2048, 'mel_channels': 128, 'mel_fmin': 0.0, 'mel_fmax': None, 'cleaned_text': True, 'num_workers': 4, 'batch_size': 8, 'pad_val': 1024}, 'train_semantic_path': 'data8/semantic.tsv', 'train_phoneme_path': 'data8/phoneme.txt', 'model': {'hidden_dim': 768, 'embedding_dim': 768, 'n_layer': 12, 'head': 12, 'n_embd': 768, 'vocab_size': 2048, 'block_size': 1000, 'embd_pdrop': 0.1, 'resid_pdrop': 0.1, 'attn_pdrop': 0.1, 'semantic_dim': 1024, 'num_layers': 6, 'ffn_hidden': 3072, 'dropout': 0.1, 'attention_dropout': 0.1, 'hidden_dropout': 0.1, 'max_text_positions': 2048, 'max_mel_positions': 8000, 'prenet_dim': 384, 'postnet_dim': 384, 'prenet_layers': 3, 'postnet_layers': 3, 'phoneme_vocab_size': 2048, 'EOS': 2047, 'pad_val': 1024}}
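The config dump above is the full Stage-1 training configuration passed to the run. A minimal sketch of how a run like this is typically started, loading a YAML config and handing it to wandb.init; this is not the actual s1_train.py code, and the config file path is an assumption:

import yaml
import wandb

# Hypothetical config path; the real script may hard-code it or read it from argparse.
with open("configs/s1.yaml") as f:
    config = yaml.safe_load(f)

run = wandb.init(
    project=config["train"]["wandb"]["project"],  # 'gpt-sovits-hindi'
    name=config["train"]["wandb"]["name"],        # 'stage1_training'
    config=config,                                # shows up as the config dump above
)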
2024-12-04 06:07:02,646 INFO MainThread:47802 [wandb_init.py:init():669] starting backend
2024-12-04 06:07:02,646 INFO MainThread:47802 [wandb_init.py:init():673] sending inform_init request
2024-12-04 06:07:02,654 INFO MainThread:47802 [backend.py:_multiprocessing_setup():104] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
2024-12-04 06:07:02,655 INFO MainThread:47802 [wandb_init.py:init():686] backend started and connected
2024-12-04 06:07:02,671 INFO MainThread:47802 [wandb_init.py:init():781] updated telemetry
2024-12-04 06:07:02,711 INFO MainThread:47802 [wandb_init.py:init():814] communicating run to backend with 90.0 second timeout
2024-12-04 06:07:03,035 INFO MainThread:47802 [wandb_init.py:init():867] starting run threads in backend
2024-12-04 06:07:03,310 INFO MainThread:47802 [wandb_run.py:_console_start():2456] atexit reg
2024-12-04 06:07:03,310 INFO MainThread:47802 [wandb_run.py:_redirect():2305] redirect: wrap_raw
2024-12-04 06:07:03,311 INFO MainThread:47802 [wandb_run.py:_redirect():2370] Wrapping output streams.
2024-12-04 06:07:03,311 INFO MainThread:47802 [wandb_run.py:_redirect():2395] Redirects installed.
2024-12-04 06:07:03,315 INFO MainThread:47802 [wandb_init.py:init():911] run started, returning control to user process
2024-12-04 06:07:05,437 INFO MainThread:47802 [wandb_watch.py:_watch():71] Watching
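The "Watching" entry is emitted by wandb.watch, which hooks a model to log gradients and parameters. A sketch of a call consistent with this line, assuming a PyTorch model object named model (hypothetical) and reusing the log_interval of 100 from the config above:

# log="all" records both gradients and parameters at the given frequency.
wandb.watch(model, log="all", log_freq=100)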
2024-12-04 06:07:15,972 INFO MainThread:47802 [wandb_run.py:_config_callback():1387] config_cb None None {'config': {'output_dir': 'logs/s1', 'train': {'seed': 1234, 'epochs': 15, 'batch_size': 8, 'save_every_n_epoch': 5, 'precision': 32, 'if_save_latest': True, 'if_save_every_weights': True, 'exp_name': 'gpt_training', 'half_weights_save_dir': 'weights/s1', 'wandb': {'project': 'gpt-sovits-hindi', 'name': 'stage1_training', 'entity': None, 'log_interval': 100}}, 'optimizer': {'lr_init': 0.0001, 'lr': 0.0004, 'lr_end': 1e-05, 'warmup_steps': 4000, 'decay_steps': 50000}, 'data': {'training_files': 'data8', 'max_sec': 60, 'max_frames': 60, 'filter_length': 2048, 'hop_length': 640, 'win_length': 2048, 'mel_channels': 128, 'mel_fmin': 0.0, 'mel_fmax': None, 'cleaned_text': True, 'num_workers': 4, 'batch_size': 8, 'pad_val': 1024}, 'train_semantic_path': 'data8/semantic.tsv', 'train_phoneme_path': 'data8/phoneme.txt', 'model': {'hidden_dim': 768, 'embedding_dim': 768, 'n_layer': 12, 'head': 12, 'n_embd': 768, 'vocab_size': 2048, 'block_size': 1000, 'embd_pdrop': 0.1, 'resid_pdrop': 0.1, 'attn_pdrop': 0.1, 'semantic_dim': 1024, 'num_layers': 6, 'ffn_hidden': 3072, 'dropout': 0.1, 'attention_dropout': 0.1, 'hidden_dropout': 0.1, 'max_text_positions': 2048, 'max_mel_positions': 8000, 'prenet_dim': 384, 'postnet_dim': 384, 'prenet_layers': 3, 'postnet_layers': 3, 'phoneme_vocab_size': 2048, 'EOS': 2047, 'pad_val': 1024}}, 'output_dir': 'logs/s1', 'is_train': True}
2024-12-04 06:07:15,973 INFO MainThread:47802 [wandb_run.py:_config_callback():1387] config_cb None None {'output_dir': 'logs/s1', 'train': {'seed': 1234, 'epochs': 15, 'batch_size': 8, 'save_every_n_epoch': 5, 'precision': 32, 'if_save_latest': True, 'if_save_every_weights': True, 'exp_name': 'gpt_training', 'half_weights_save_dir': 'weights/s1', 'wandb': {'project': 'gpt-sovits-hindi', 'name': 'stage1_training', 'entity': None, 'log_interval': 100}}, 'optimizer': {'lr_init': 0.0001, 'lr': 0.0004, 'lr_end': 1e-05, 'warmup_steps': 4000, 'decay_steps': 50000}, 'data': {'training_files': 'data8', 'max_sec': 60, 'max_frames': 60, 'filter_length': 2048, 'hop_length': 640, 'win_length': 2048, 'mel_channels': 128, 'mel_fmin': 0.0, 'mel_fmax': None, 'cleaned_text': True, 'num_workers': 4, 'batch_size': 8, 'pad_val': 1024}, 'train_semantic_path': 'data8/semantic.tsv', 'train_phoneme_path': 'data8/phoneme.txt', 'model': {'hidden_dim': 768, 'embedding_dim': 768, 'n_layer': 12, 'head': 12, 'n_embd': 768, 'vocab_size': 2048, 'block_size': 1000, 'embd_pdrop': 0.1, 'resid_pdrop': 0.1, 'attn_pdrop': 0.1, 'semantic_dim': 1024, 'num_layers': 6, 'ffn_hidden': 3072, 'dropout': 0.1, 'attention_dropout': 0.1, 'hidden_dropout': 0.1, 'max_text_positions': 2048, 'max_mel_positions': 8000, 'prenet_dim': 384, 'postnet_dim': 384, 'prenet_layers': 3, 'postnet_layers': 3, 'phoneme_vocab_size': 2048, 'EOS': 2047, 'pad_val': 1024}}
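The two config_cb entries record updates pushed into the run config after init. A sketch consistent with them, assuming the trainer calls wandb.config.update (the exact call site in s1_train.py is an assumption):

# First callback: a wrapper dict holding the config plus output_dir and an is_train flag.
wandb.config.update({"config": config, "output_dir": config["output_dir"], "is_train": True})
# Second callback: the raw config dict itself.
wandb.config.update(config)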
2024-12-04 07:59:52,403 WARNING MsgRouterThr:47802 [router.py:message_loop():75] message_loop has been closed
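The closing WARNING from MsgRouterThr is the wandb message router shutting down as the run ends (here roughly 1 h 53 min after init) and is typically benign at teardown rather than an error. An explicit shutdown in the training script would look like:

wandb.finish()  # flushes pending data and stops the background service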