Upload llama-1B/64_GPUS/dp-1_tp-1_pp-64_mbz-4
b47057b verified
========================
START TIME: Sat Jul 6 09:35:07 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
Already on 'bench_cluster'
M examples/config_tiny_llama.py
M examples/config_tiny_llama.yaml
M examples/train_tiny_llama.sh
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
[2024-07-06 09:35:09,819] torch.distributed.run: [WARNING]
[2024-07-06 09:35:09,819] torch.distributed.run: [WARNING] *****************************************
[2024-07-06 09:35:09,819] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-07-06 09:35:09,819] torch.distributed.run: [WARNING] *****************************************
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Config:
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: run='%date_%jobid',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: seed=42,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: step=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: consumed_train_samples=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: benchmark_csv_path=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: ignore_sanity_checks=True),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: parallelism=ParallelismArgs(dp=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pp=64,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tp=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.AllForwardAllBackwardPipelineEngine object at 0x7f9326ab48b0>,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tp_linear_async_communication=False,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: expert_parallel_size=1),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: eos_token_id=2,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hidden_act='silu',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hidden_size=2048,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: initializer_range=0.02,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: intermediate_size=4096,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: is_llama_config=True,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: max_position_embeddings=4096,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_attention_heads=32,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_hidden_layers=24,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_key_value_heads=32,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pad_token_id=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pretraining_tp=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rms_norm_eps=1e-05,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rope_scaling=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rope_theta=10000.0,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tie_word_embeddings=True,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: use_cache=True,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: vocab_size=50257),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: init_method=RandomInit(std=0.025),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: dtype=torch.bfloat16,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: make_vocab_size_divisible_by=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: ddp_bucket_cap_mb=25),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tokenizer_revision=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tokenizer_max_length=None),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: checkpoints=CheckpointsArgs(checkpoints_path=PosixPath('/dev/null'),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: checkpoint_interval=100000,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: save_initial_state=False,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: resume_checkpoint_path=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: checkpoints_path_is_shared_file_system=False),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: logging=LoggingArgs(log_level='info',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: log_level_replica='info',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: iteration_step_info_interval=1),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tokens=TokensArgs(sequence_length=4096,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: train_steps=20,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: micro_batch_size=4,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: batch_accumulation_per_replica=256,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: val_check_interval=-1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: limit_val_batches=0,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: limit_test_batches=0),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: adam_beta1=0.9,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: adam_beta2=0.95,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: torch_adam_is_fused=True,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: name='adamW'),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: zero_stage=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: weight_decay=0.01,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: clip_grad=1.0,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: accumulate_grad_in_fp32=True,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_warmup_steps=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_warmup_style='linear',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_decay_style='linear',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_decay_steps=19,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_decay_starting_step=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: min_decay_lr=1e-05)),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: start_training_step=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hf_dataset_splits='train',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hf_dataset_config_name=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: dataset_processing_num_proc_per_process=64,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: dataset_overwrite_cache=False,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: text_column_name='text'),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: seed=42,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_loading_workers=0))],
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: profiler=ProfilerArgs(profiler_export_path=PosixPath('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/64_GPUS/dp-1_tp-1_pp-64_mbz-4')),
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lighteval=None)
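The Config dump above fixes the run's batch geometry. Assuming the usual relation (global batch = micro_batch_size × batch_accumulation_per_replica × dp), this run processes 1024 samples, i.e. about 4.2M tokens, per optimizer step. A minimal sketch of that arithmetic, with the values copied from TokensArgs/ParallelismArgs:

```python
# Batch-geometry arithmetic implied by the Config dump above
# (values copied from TokensArgs and ParallelismArgs in the log).
micro_batch_size = 4
batch_accumulation_per_replica = 256
dp = 1
sequence_length = 4096

# Each optimizer step accumulates `batch_accumulation_per_replica`
# micro-batches on each of the `dp` data-parallel replicas.
global_batch_samples = micro_batch_size * batch_accumulation_per_replica * dp
tokens_per_step = global_batch_samples * sequence_length

print(global_batch_samples)  # 1024
print(tokens_per_step)       # 4194304
```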
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Model Config:
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: LlamaConfig(bos_token_id=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: eos_token_id=2,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hidden_act='silu',
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hidden_size=2048,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: initializer_range=0.02,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: intermediate_size=4096,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: is_llama_config=True,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: max_position_embeddings=4096,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_attention_heads=32,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_hidden_layers=24,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_key_value_heads=32,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pad_token_id=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pretraining_tp=1,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rms_norm_eps=1e-05,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rope_scaling=None,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rope_theta=10000.0,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tie_word_embeddings=True,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: use_cache=True,
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: vocab_size=50257)
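The per-rank figures reported below (41.9M / 80.01MiB per decoder layer, 1.21G total) can be reproduced from this LlamaConfig. The sketch below assumes standard Llama layer shapes (4h² attention since num_key_value_heads == num_attention_heads, a 3·h·intermediate SiLU-gated MLP, two RMSNorms) and that the tied lm_head is counted separately in the global total; the latter is an assumption, not something the log confirms.

```python
# Rough parameter accounting for the LlamaConfig above.
# Assumes standard Llama shapes; counting the tied lm_head separately
# in the global total is an assumption to match the reported 1.21G.
h, inter, layers, vocab = 2048, 4096, 24, 50257

per_layer = (
    4 * h * h        # q, k, v, o projections (num_kv_heads == num_heads)
    + 3 * h * inter  # gate, up, down projections of the SiLU-gated MLP
    + 2 * h          # input + post-attention RMSNorm weights
)
embed = vocab * h
total = 2 * embed + layers * per_layer + h  # embed + lm_head + layers + final norm

mib = lambda n: n * 2 / 2**20  # bf16: 2 bytes per parameter

print(f"{per_layer / 1e6:.1f}M, {mib(per_layer):.2f}MiB")  # 41.9M, 80.01MiB
print(f"{total / 1e9:.2f}G")                               # 1.21G
```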
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Building model..
[default0]:07/06/2024 09:35:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Setting PP block ranks...
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=32|TP=0|ip-26-0-165-131]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=32|TP=0|ip-26-0-165-131]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=32|TP=0|ip-26-0-165-131]: No checkpoint path provided.
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=51|TP=0|ip-26-0-173-246]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=51|TP=0|ip-26-0-173-246]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=51|TP=0|ip-26-0-173-246]: No checkpoint path provided.
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=22|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=22|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=22|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=35|TP=0|ip-26-0-165-131]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=35|TP=0|ip-26-0-165-131]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=35|TP=0|ip-26-0-165-131]: No checkpoint path provided.
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=41|TP=0|ip-26-0-165-59]: Local number of parameters: 0 (0.00MiB)
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=41|TP=0|ip-26-0-165-59]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=41|TP=0|ip-26-0-165-59]: No checkpoint path provided.
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=56|TP=0|ip-26-0-174-36]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=56|TP=0|ip-26-0-174-36]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=56|TP=0|ip-26-0-174-36]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=18|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=38|TP=0|ip-26-0-165-131]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=45|TP=0|ip-26-0-165-59]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=59|TP=0|ip-26-0-174-36]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=59|TP=0|ip-26-0-174-36]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=59|TP=0|ip-26-0-174-36]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=18|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=18|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=38|TP=0|ip-26-0-165-131]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=38|TP=0|ip-26-0-165-131]: No checkpoint path provided.
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=45|TP=0|ip-26-0-165-59]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=63|TP=0|ip-26-0-174-36]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=63|TP=0|ip-26-0-174-36]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=63|TP=0|ip-26-0-174-36]: No checkpoint path provided.
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=14|TP=0|ip-26-0-161-142]: Local number of parameters: 41.9M (80.01MiB)
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=14|TP=0|ip-26-0-161-142]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=14|TP=0|ip-26-0-161-142]: No checkpoint path provided.
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=17|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=50|TP=0|ip-26-0-173-246]: Local number of parameters: 0 (0.00MiB)
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=33|TP=0|ip-26-0-165-131]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=3|TP=0|ip-26-0-160-192]: Local number of parameters: 41.9M (80.01MiB)
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=3|TP=0|ip-26-0-160-192]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=3|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=2|TP=0|ip-26-0-160-192]: Local number of parameters: 41.9M (80.01MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=45|TP=0|ip-26-0-165-59]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=58|TP=0|ip-26-0-174-36]: Local number of parameters: 0 (0.00MiB)
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=58|TP=0|ip-26-0-174-36]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=58|TP=0|ip-26-0-174-36]: No checkpoint path provided.
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=8|TP=0|ip-26-0-161-142]: Local number of parameters: 41.9M (80.01MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=8|TP=0|ip-26-0-161-142]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=8|TP=0|ip-26-0-161-142]: No checkpoint path provided.
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=17|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=17|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=50|TP=0|ip-26-0-173-246]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=49|TP=0|ip-26-0-173-246]: Local number of parameters: 0 (0.00MiB)
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=33|TP=0|ip-26-0-165-131]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=33|TP=0|ip-26-0-165-131]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=2|TP=0|ip-26-0-160-192]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=2|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=40|TP=0|ip-26-0-165-59]: Local number of parameters: 0 (0.00MiB)
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=62|TP=0|ip-26-0-174-36]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=15|TP=0|ip-26-0-161-142]: Local number of parameters: 41.9M (80.01MiB)
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=15|TP=0|ip-26-0-161-142]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=15|TP=0|ip-26-0-161-142]: No checkpoint path provided.
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=19|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=49|TP=0|ip-26-0-173-246]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=34|TP=0|ip-26-0-165-131]: Local number of parameters: 0 (0.00MiB)
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=34|TP=0|ip-26-0-165-131]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=1|TP=0|ip-26-0-160-192]: Local number of parameters: 41.9M (80.01MiB)
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=1|TP=0|ip-26-0-160-192]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=40|TP=0|ip-26-0-165-59]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=62|TP=0|ip-26-0-174-36]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=19|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=50|TP=0|ip-26-0-173-246]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=34|TP=0|ip-26-0-165-131]: No checkpoint path provided.
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=1|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=40|TP=0|ip-26-0-165-59]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=44|TP=0|ip-26-0-165-59]: Local number of parameters: 0 (0.00MiB)
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=62|TP=0|ip-26-0-174-36]: No checkpoint path provided.
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=23|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=49|TP=0|ip-26-0-173-246]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=36|TP=0|ip-26-0-165-131]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=36|TP=0|ip-26-0-165-131]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=36|TP=0|ip-26-0-165-131]: No checkpoint path provided.
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=7|TP=0|ip-26-0-160-192]: Local number of parameters: 41.9M (80.01MiB)
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=44|TP=0|ip-26-0-165-59]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=57|TP=0|ip-26-0-174-36]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=23|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=20|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=52|TP=0|ip-26-0-173-246]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=37|TP=0|ip-26-0-165-131]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=37|TP=0|ip-26-0-165-131]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=37|TP=0|ip-26-0-165-131]: No checkpoint path provided.
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=7|TP=0|ip-26-0-160-192]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=7|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=44|TP=0|ip-26-0-165-59]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=60|TP=0|ip-26-0-174-36]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=13|TP=0|ip-26-0-161-142]: Local number of parameters: 41.9M (80.01MiB)
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=20|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=52|TP=0|ip-26-0-173-246]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=39|TP=0|ip-26-0-165-131]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=39|TP=0|ip-26-0-165-131]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Total number of parameters: 1.21G (2312.82MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Local number of parameters: 145M (276.32MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [After model building] Memory usage: 277.33MiB. Peak allocated: 279.36MiB Peak reserved: 294.00MiB
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Parametrizing model parameters using StandardParametrizator
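With pp=64 but only 24 decoder layers, most pipeline stages hold no parameters, which is why the log is full of "Local number of parameters: 0 (0.00MiB)" lines. One stage-to-block assignment consistent with the reported numbers (hypothetical — the log does not print the mapping itself): stage 0 holds the token embedding plus the first decoder layer (145M), stages 1-23 hold one layer each (41.9M), stage 24 holds only the final RMSNorm (2.05K, the lm_head weight being tied to the embedding), and stages 25-63 are empty.

```python
# Hypothetical stage -> parameter mapping consistent with the log output
# (nanotron's actual block assignment is not printed, so this is a sketch).
h, vocab, layers, pp = 2048, 50257, 24, 64

embed = vocab * h
per_layer = 4 * h * h + 3 * h * 4096 + 2 * h
final_norm = h  # lm_head assumed tied to the embedding, leaving only the norm

stage_params = [0] * pp
stage_params[0] = embed + per_layer  # stage 0: embedding + first decoder layer
for s in range(1, layers):           # stages 1..23: one decoder layer each
    stage_params[s] = per_layer
stage_params[layers] = final_norm    # stage 24: final RMSNorm only

print(f"{stage_params[0] / 1e6:.0f}M")         # 145M
print(f"{stage_params[24] / 1e3:.2f}K")        # 2.05K
print(sum(1 for p in stage_params if p == 0))  # 39 empty stages
```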
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=47|TP=0|ip-26-0-165-59]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=31|TP=0|ip-26-0-163-134]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=31|TP=0|ip-26-0-163-134]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=31|TP=0|ip-26-0-163-134]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=60|TP=0|ip-26-0-174-36]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=57|TP=0|ip-26-0-174-36]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=57|TP=0|ip-26-0-174-36]: No checkpoint path provided.
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=13|TP=0|ip-26-0-161-142]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=23|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=52|TP=0|ip-26-0-173-246]: No checkpoint path provided.
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=39|TP=0|ip-26-0-165-131]: No checkpoint path provided.
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=47|TP=0|ip-26-0-165-59]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=29|TP=0|ip-26-0-163-134]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=60|TP=0|ip-26-0-174-36]: No checkpoint path provided.
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=13|TP=0|ip-26-0-161-142]: No checkpoint path provided.
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=21|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=55|TP=0|ip-26-0-173-246]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=43|TP=0|ip-26-0-165-59]: Local number of parameters: 0 (0.00MiB)
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=30|TP=0|ip-26-0-163-134]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=24|TP=0|ip-26-0-163-134]: Local number of parameters: 2.05K (0.00MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=61|TP=0|ip-26-0-174-36]: Local number of parameters: 0 (0.00MiB)
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=9|TP=0|ip-26-0-161-142]: Local number of parameters: 41.9M (80.01MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=21|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=20|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=55|TP=0|ip-26-0-173-246]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=55|TP=0|ip-26-0-173-246]: No checkpoint path provided.
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=43|TP=0|ip-26-0-165-59]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=30|TP=0|ip-26-0-163-134]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=30|TP=0|ip-26-0-163-134]: No checkpoint path provided.
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=61|TP=0|ip-26-0-174-36]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=11|TP=0|ip-26-0-161-142]: Local number of parameters: 41.9M (80.01MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=21|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=19|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=53|TP=0|ip-26-0-173-246]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=53|TP=0|ip-26-0-173-246]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=53|TP=0|ip-26-0-173-246]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=4|TP=0|ip-26-0-160-192]: Local number of parameters: 41.9M (80.01MiB)
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=43|TP=0|ip-26-0-165-59]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=28|TP=0|ip-26-0-163-134]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=61|TP=0|ip-26-0-174-36]: No checkpoint path provided.
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=11|TP=0|ip-26-0-161-142]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=9|TP=0|ip-26-0-161-142]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=16|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=16|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=54|TP=0|ip-26-0-173-246]: Local number of parameters: 0 (0.00MiB)
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=6|TP=0|ip-26-0-160-192]: Local number of parameters: 41.9M (80.01MiB)
[default7]:07/06/2024 09:35:47 [INFO|DP=0|PP=47|TP=0|ip-26-0-165-59]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=26|TP=0|ip-26-0-163-134]: Local number of parameters: 0 (0.00MiB)
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=26|TP=0|ip-26-0-163-134]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=9|TP=0|ip-26-0-161-142]: No checkpoint path provided.
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=11|TP=0|ip-26-0-161-142]: No checkpoint path provided.
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=16|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=54|TP=0|ip-26-0-173-246]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=54|TP=0|ip-26-0-173-246]: No checkpoint path provided.
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=6|TP=0|ip-26-0-160-192]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=4|TP=0|ip-26-0-160-192]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=4|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=6|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=42|TP=0|ip-26-0-165-59]: Local number of parameters: 0 (0.00MiB)
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=42|TP=0|ip-26-0-165-59]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=26|TP=0|ip-26-0-163-134]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=10|TP=0|ip-26-0-161-142]: Local number of parameters: 41.9M (80.01MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=48|TP=0|ip-26-0-173-246]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=48|TP=0|ip-26-0-173-246]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=48|TP=0|ip-26-0-173-246]: No checkpoint path provided.
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=5|TP=0|ip-26-0-160-192]: Local number of parameters: 41.9M (80.01MiB)
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=5|TP=0|ip-26-0-160-192]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=5|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=42|TP=0|ip-26-0-165-59]: No checkpoint path provided.
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=24|TP=0|ip-26-0-163-134]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=10|TP=0|ip-26-0-161-142]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default2]:07/06/2024 09:35:47 [INFO|DP=0|PP=10|TP=0|ip-26-0-161-142]: No checkpoint path provided.
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=46|TP=0|ip-26-0-165-59]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=24|TP=0|ip-26-0-163-134]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=12|TP=0|ip-26-0-161-142]: Local number of parameters: 41.9M (80.01MiB)
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=12|TP=0|ip-26-0-161-142]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=12|TP=0|ip-26-0-161-142]: No checkpoint path provided.
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=46|TP=0|ip-26-0-165-59]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:35:47 [INFO|DP=0|PP=46|TP=0|ip-26-0-165-59]: No checkpoint path provided.
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=29|TP=0|ip-26-0-163-134]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:35:47 [INFO|DP=0|PP=29|TP=0|ip-26-0-163-134]: No checkpoint path provided.
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=28|TP=0|ip-26-0-163-134]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=27|TP=0|ip-26-0-163-134]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:35:47 [INFO|DP=0|PP=28|TP=0|ip-26-0-163-134]: No checkpoint path provided.
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=27|TP=0|ip-26-0-163-134]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:35:47 [INFO|DP=0|PP=27|TP=0|ip-26-0-163-134]: No checkpoint path provided.
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=25|TP=0|ip-26-0-163-134]: Local number of parameters: 103M (196.32MiB)
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=25|TP=0|ip-26-0-163-134]: [After model building] Memory usage: 196.33MiB. Peak allocated: 196.35MiB Peak reserved: 200.00MiB
[default1]:07/06/2024 09:35:47 [INFO|DP=0|PP=25|TP=0|ip-26-0-163-134]: No checkpoint path provided.
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] Size of optimizer params per rank:
[default0]:07/06/2024 09:35:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 0 has 145M out of 145M (100.00%) params' optimizer states
[default0]:07/06/2024 09:35:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/06/2024 09:35:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Using `datasets` library
[default0]:07/06/2024 09:35:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default1]:Traceback (most recent call last):
[default1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 233, in <module>
[default1]: trainer = DistributedTrainer(config_file)
[default1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 185, in __init__
[default1]: self.optimizer, self.grad_accumulator = init_optimizer_and_grad_accumulator(
[default1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/helpers.py", line 401, in init_optimizer_and_grad_accumulator
[default1]: param = model.get_parameter(optim_model_param_name)
[default1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 714, in get_parameter
[default1]: mod: torch.nn.Module = self.get_submodule(module_path)
[default1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 681, in get_submodule
[default1]: raise AttributeError(mod._get_name() + " has no "
[default1]:AttributeError: PipelineBlock has no attribute `pp_block`
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:35:48 [WARNING|DP=0|PP=0|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[2024-07-06 09:35:52,213] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3622861 closing signal SIGTERM
[2024-07-06 09:35:52,214] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3622863 closing signal SIGTERM
[2024-07-06 09:35:52,214] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3622864 closing signal SIGTERM
[2024-07-06 09:35:52,215] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3622865 closing signal SIGTERM
[2024-07-06 09:35:52,215] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3622866 closing signal SIGTERM
[2024-07-06 09:35:52,216] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3622867 closing signal SIGTERM
[2024-07-06 09:35:52,216] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3622868 closing signal SIGTERM
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:35:52 [WARNING|DP=0|PP=14|TP=0|ip-26-0-161-142]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:35:52 [WARNING|DP=0|PP=15|TP=0|ip-26-0-161-142]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:35:52 [WARNING|DP=0|PP=8|TP=0|ip-26-0-161-142]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:35:52 [WARNING|DP=0|PP=11|TP=0|ip-26-0-161-142]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:35:52 [WARNING|DP=0|PP=13|TP=0|ip-26-0-161-142]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:35:52 [WARNING|DP=0|PP=12|TP=0|ip-26-0-161-142]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:35:52 [WARNING|DP=0|PP=10|TP=0|ip-26-0-161-142]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:35:52 [WARNING|DP=0|PP=9|TP=0|ip-26-0-161-142]: Repo card metadata block was not found. Setting CardData to empty.
[2024-07-06 09:35:54,231] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 3622862) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:35:52
host : ip-26-0-163-134.ec2.internal
rank : 25 (local_rank: 1)
exitcode : 1 (pid: 3622862)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
srun: error: ip-26-0-163-134: task 3: Exited with exit code 1
[default0]:[rank16]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default0]:[rank16]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default0]:[rank16]:[E ProcessGroupNCCL.cpp:1182] [Rank 16] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-163-134.ec2.internal<53538>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f143698fd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f1437b36fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f1437b3727b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f1437b3ac1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f1437b3b839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7f148183fe95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7f1486947609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7f1486712353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:terminate called after throwing an instance of 'c10::DistBackendError'
[default0]: what(): [Rank 16] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-163-134.ec2.internal<53538>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f143698fd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f1437b36fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f1437b3727b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f1437b3ac1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f1437b3b839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7f148183fe95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7f1486947609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7f1486712353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f143698fd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: <unknown function> + 0xdf6b11 (0x7f1437891b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7f148183fe95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #3: <unknown function> + 0x8609 (0x7f1486947609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #4: clone + 0x43 (0x7f1486712353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:35:59 [WARNING|DP=0|PP=38|TP=0|ip-26-0-165-131]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:35:59 [WARNING|DP=0|PP=34|TP=0|ip-26-0-165-131]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:35:59 [WARNING|DP=0|PP=35|TP=0|ip-26-0-165-131]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:35:59 [WARNING|DP=0|PP=36|TP=0|ip-26-0-165-131]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:35:59 [WARNING|DP=0|PP=37|TP=0|ip-26-0-165-131]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:36:00 [WARNING|DP=0|PP=39|TP=0|ip-26-0-165-131]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:36:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Training Plan] There are 1 training stages
[default0]:07/06/2024 09:36:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Stage Training Stage] start from step 1
[default0]:07/06/2024 09:36:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]:
[default0]:07/06/2024 09:36:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Start training] datetime: 2024-07-06 09:36:00.548798 | mbs: 4 | grad_accum: 256 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:36:00 [WARNING|DP=0|PP=51|TP=0|ip-26-0-173-246]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:36:00 [WARNING|DP=0|PP=33|TP=0|ip-26-0-165-131]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:36:00 [WARNING|DP=0|PP=43|TP=0|ip-26-0-165-59]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:36:00 [WARNING|DP=0|PP=47|TP=0|ip-26-0-165-59]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:36:00 [WARNING|DP=0|PP=56|TP=0|ip-26-0-174-36]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:36:00 [WARNING|DP=0|PP=41|TP=0|ip-26-0-165-59]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:36:00 [WARNING|DP=0|PP=44|TP=0|ip-26-0-165-59]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:36:00 [WARNING|DP=0|PP=59|TP=0|ip-26-0-174-36]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:36:00 [WARNING|DP=0|PP=40|TP=0|ip-26-0-165-59]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:36:00 [WARNING|DP=0|PP=42|TP=0|ip-26-0-165-59]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:36:00 [WARNING|DP=0|PP=58|TP=0|ip-26-0-174-36]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:36:00 [WARNING|DP=0|PP=49|TP=0|ip-26-0-173-246]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:36:00 [WARNING|DP=0|PP=50|TP=0|ip-26-0-173-246]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:36:00 [WARNING|DP=0|PP=54|TP=0|ip-26-0-173-246]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:36:00 [WARNING|DP=0|PP=60|TP=0|ip-26-0-174-36]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:36:00 [WARNING|DP=0|PP=46|TP=0|ip-26-0-165-59]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:36:00 [WARNING|DP=0|PP=57|TP=0|ip-26-0-174-36]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:36:00 [WARNING|DP=0|PP=1|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:36:00 [WARNING|DP=0|PP=61|TP=0|ip-26-0-174-36]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:36:00 [WARNING|DP=0|PP=48|TP=0|ip-26-0-173-246]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:36:00 [WARNING|DP=0|PP=53|TP=0|ip-26-0-173-246]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:36:00 [WARNING|DP=0|PP=3|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:36:00 [WARNING|DP=0|PP=7|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:36:00 [WARNING|DP=0|PP=2|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:36:00 [WARNING|DP=0|PP=4|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:36:00 [WARNING|DP=0|PP=5|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:36:00 [WARNING|DP=0|PP=6|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:36:00 [WARNING|DP=0|PP=45|TP=0|ip-26-0-165-59]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:36:00 [WARNING|DP=0|PP=62|TP=0|ip-26-0-174-36]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:36:00 [WARNING|DP=0|PP=63|TP=0|ip-26-0-174-36]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:36:00 [WARNING|DP=0|PP=55|TP=0|ip-26-0-173-246]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:36:00 [WARNING|DP=0|PP=32|TP=0|ip-26-0-165-131]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:36:01 [WARNING|DP=0|PP=52|TP=0|ip-26-0-173-246]: Repo card metadata block was not found. Setting CardData to empty.
[2024-07-06 09:36:02,221] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 228469 closing signal SIGTERM
[2024-07-06 09:36:02,222] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 228470 closing signal SIGTERM
[2024-07-06 09:36:02,222] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 228471 closing signal SIGTERM
[2024-07-06 09:36:02,223] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 228472 closing signal SIGTERM
[2024-07-06 09:36:02,223] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 228473 closing signal SIGTERM
[2024-07-06 09:36:02,223] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 228474 closing signal SIGTERM
[2024-07-06 09:36:02,224] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 228475 closing signal SIGTERM
[2024-07-06 09:36:04,240] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 228468) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:36:02
host : ip-26-0-161-78.ec2.internal
rank : 16 (local_rank: 0)
exitcode : -6 (pid: 228468)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 228468
============================================================
srun: error: ip-26-0-161-78: task 1: Exited with exit code 1
[default1]:[rank33]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default1]:[rank33]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default1]:[rank33]:[E ProcessGroupNCCL.cpp:1182] [Rank 33] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default1]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default1]:Last error:
[default1]:socketProgress: Connection closed by remote peer ip-26-0-161-78.ec2.internal<41998>
[default1]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fcf83ec1d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7fcf85068fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7fcf8506927b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7fcf8506cc1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7fcf8506d839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #5: <unknown function> + 0xd3e95 (0x7fcfced71e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #6: <unknown function> + 0x8609 (0x7fcfd3e79609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #7: clone + 0x43 (0x7fcfd3c44353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default1]:terminate called after throwing an instance of 'c10::DistBackendError'
[default1]: what(): [Rank 33] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default1]:
[default1]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fcf83ec1d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: <unknown function> + 0xdf6b11 (0x7fcf84dc3b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: <unknown function> + 0xd3e95 (0x7fcfced71e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #3: <unknown function> + 0x8609 (0x7fcfd3e79609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #4: clone + 0x43 (0x7fcfd3c44353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:1182] [Rank 8] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-161-78.ec2.internal<40216>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd69d558d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7fd69e6fffa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7fd69e70027b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7fd69e703c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7fd69e704839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7fd6e8408e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7fd6ed510609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7fd6ed2db353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:terminate called after throwing an instance of 'c10::DistBackendError'
[default0]: what(): [Rank 8] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd69d558d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: <unknown function> + 0xdf6b11 (0x7fd69e45ab11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7fd6e8408e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #3: <unknown function> + 0x8609 (0x7fd6ed510609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #4: clone + 0x43 (0x7fd6ed2db353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[2024-07-06 09:36:12,236] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 646504 closing signal SIGTERM
[2024-07-06 09:36:12,237] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 646505 closing signal SIGTERM
[2024-07-06 09:36:12,237] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 646506 closing signal SIGTERM
[2024-07-06 09:36:12,238] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 646507 closing signal SIGTERM
[2024-07-06 09:36:12,239] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 646508 closing signal SIGTERM
[2024-07-06 09:36:12,240] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 646509 closing signal SIGTERM
[2024-07-06 09:36:12,240] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 646510 closing signal SIGTERM
[2024-07-06 09:36:14,658] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 646503) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:36:12
host : ip-26-0-161-142.ec2.internal
rank : 8 (local_rank: 0)
exitcode : -6 (pid: 646503)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 646503
============================================================
srun: error: ip-26-0-161-142: task 2: Exited with exit code 1
[2024-07-06 09:36:32,245] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 329249 closing signal SIGTERM
[2024-07-06 09:36:32,245] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 329251 closing signal SIGTERM
[2024-07-06 09:36:32,246] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 329252 closing signal SIGTERM
[2024-07-06 09:36:32,247] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 329253 closing signal SIGTERM
[2024-07-06 09:36:32,247] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 329254 closing signal SIGTERM
[2024-07-06 09:36:32,248] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 329255 closing signal SIGTERM
[2024-07-06 09:36:32,248] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 329256 closing signal SIGTERM
[default0]:07/06/2024 09:36:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/06/2024 09:36:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Memory usage: 1382.63MiB. Peak allocated 1382.63MiB. Peak reserved: 1402.00MiB
[2024-07-06 09:36:34,179] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 1 (pid: 329250) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:36:32
host : ip-26-0-165-131.ec2.internal
rank : 33 (local_rank: 1)
exitcode : -6 (pid: 329250)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 329250
============================================================
srun: error: ip-26-0-165-131: task 5: Exited with exit code 1
[default0]:[rank48]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default0]:[rank48]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default0]:[rank48]:[E ProcessGroupNCCL.cpp:1182] [Rank 48] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-165-131.ec2.internal<53908>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f2cad4edd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f2cae694fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f2cae69527b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f2cae698c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f2cae699839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7f2cf839de95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7f2cfd4a5609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7f2cfd270353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:terminate called after throwing an instance of 'c10::DistBackendError'
[default0]: what(): [Rank 48] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f2cad4edd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: <unknown function> + 0xdf6b11 (0x7f2cae3efb11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7f2cf839de95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #3: <unknown function> + 0x8609 (0x7f2cfd4a5609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #4: clone + 0x43 (0x7f2cfd270353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:[rank0]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default0]:[rank0]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default0]:[rank0]:[E ProcessGroupNCCL.cpp:1182] [Rank 0] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-165-131.ec2.internal<52874>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f9398119d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f93992c0fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f93992c127b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f93992c4c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f93992c5839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7f93e2fc9e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7f93e80d1609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7f93e7e9c353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:terminate called after throwing an instance of 'c10::DistBackendError'
[default0]: what(): [Rank 0] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f9398119d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: <unknown function> + 0xdf6b11 (0x7f939901bb11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7f93e2fc9e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #3: <unknown function> + 0x8609 (0x7f93e80d1609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #4: clone + 0x43 (0x7f93e7e9c353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default6]:Traceback (most recent call last):
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default6]: trainer.train(dataloader)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 430, in train
[default6]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 459, in training_step
[default6]: outputs = self.pipeline_engine.train_batch_iter(
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 187, in train_batch_iter
[default6]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default6]: output = model(**micro_batch)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default6]: return self._call_impl(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default6]: return forward_call(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 890, in forward
[default6]: sharded_logits = self.model(
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default6]: return self._call_impl(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default6]: return forward_call(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default6]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default6]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default6]: return self._call_impl(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default6]: return forward_call(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default6]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default6]: pipeline_state.run_communication()
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default6]: recv_activation_tensor = recv_activation()
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default6]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default6]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default6]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 246, in _recv_meta
[default6]: dist.recv(
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
[default6]: return func(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1706, in recv
[default6]: pg.recv([tensor], group_src_rank, tag).wait()
[default6]:torch.distributed.DistBackendError: [6] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '5:6', but store->get('5:6') got error: Connection reset by peer
[default6]:Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:670 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f6827680d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: <unknown function> + 0x589518e (0x7f685f63a18e in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x360 (0x7f685f6349a0 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7f685f634ce2 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7f685f635b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f685f5eaf81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f685f5eaf81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f685f5eaf81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #8: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f685f5eaf81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #9: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xa9 (0x7f6828828c69 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #10: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x22b (0x7f682882fc5b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #11: c10d::ProcessGroupNCCL::recv(std::vector<at::Tensor, std::allocator<at::Tensor> >&, int, int) + 0x550 (0x7f6828852b60 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #12: <unknown function> + 0x5838439 (0x7f685f5dd439 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #13: <unknown function> + 0x5843330 (0x7f685f5e8330 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #14: <unknown function> + 0x58433c5 (0x7f685f5e83c5 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #15: <unknown function> + 0x4e893cc (0x7f685ec2e3cc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #16: <unknown function> + 0x1a08a88 (0x7f685b7ada88 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #17: <unknown function> + 0x5849a84 (0x7f685f5eea84 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #18: <unknown function> + 0x584ed35 (0x7f685f5f3d35 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #19: <unknown function> + 0xc97eee (0x7f6871ea5eee in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default6]:frame #20: <unknown function> + 0x413ea4 (0x7f6871621ea4 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default6]:frame #21: <unknown function> + 0x1445a6 (0x563fe05bb5a6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #22: _PyObject_MakeTpCall + 0x26b (0x563fe05b4a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #23: <unknown function> + 0x150866 (0x563fe05c7866 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #24: _PyEval_EvalFrameDefault + 0x4c12 (0x563fe05b0142 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #25: _PyFunction_Vectorcall + 0x6c (0x563fe05bba2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #26: PyObject_Call + 0xbc (0x563fe05c7f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #27: _PyEval_EvalFrameDefault + 0x2d83 (0x563fe05ae2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #28: _PyFunction_Vectorcall + 0x6c (0x563fe05bba2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #29: _PyEval_EvalFrameDefault + 0x13ca (0x563fe05ac8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #30: <unknown function> + 0x150582 (0x563fe05c7582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #31: _PyEval_EvalFrameDefault + 0x13ca (0x563fe05ac8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #32: <unknown function> + 0x150582 (0x563fe05c7582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #33: _PyEval_EvalFrameDefault + 0x13ca (0x563fe05ac8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #34: <unknown function> + 0x150582 (0x563fe05c7582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #35: _PyEval_EvalFrameDefault + 0x13ca (0x563fe05ac8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #36: _PyObject_FastCallDictTstate + 0xd0 (0x563fe05b3f50 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #37: _PyObject_Call_Prepend + 0x69 (0x563fe05c5c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #38: <unknown function> + 0x211239 (0x563fe0688239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #39: _PyObject_MakeTpCall + 0x26b (0x563fe05b4a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #40: _PyEval_EvalFrameDefault + 0x4eb6 (0x563fe05b03e6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #41: _PyFunction_Vectorcall + 0x6c (0x563fe05bba2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #42: _PyEval_EvalFrameDefault + 0x72c (0x563fe05abc5c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #43: _PyFunction_Vectorcall + 0x6c (0x563fe05bba2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #44: _PyEval_EvalFrameDefault + 0x13ca (0x563fe05ac8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #45: <unknown function> + 0x150582 (0x563fe05c7582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #46: PyObject_Call + 0xbc (0x563fe05c7f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #47: _PyEval_EvalFrameDefault + 0x2d83 (0x563fe05ae2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #48: <unknown function> + 0x150582 (0x563fe05c7582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #49: PyObject_Call + 0xbc (0x563fe05c7f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #50: _PyEval_EvalFrameDefault + 0x2d83 (0x563fe05ae2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #51: _PyFunction_Vectorcall + 0x6c (0x563fe05bba2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #52: _PyObject_FastCallDictTstate + 0x187 (0x563fe05b4007 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #53: _PyObject_Call_Prepend + 0x69 (0x563fe05c5c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #54: <unknown function> + 0x211239 (0x563fe0688239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #55: PyObject_Call + 0x207 (0x563fe05c8067 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #56: _PyEval_EvalFrameDefault + 0x2d83 (0x563fe05ae2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #57: <unknown function> + 0x150582 (0x563fe05c7582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #58: _PyEval_EvalFrameDefault + 0x13ca (0x563fe05ac8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #59: <unknown function> + 0x150582 (0x563fe05c7582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #60: PyObject_Call + 0xbc (0x563fe05c7f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #61: _PyEval_EvalFrameDefault + 0x2d83 (0x563fe05ae2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #62: <unknown function> + 0x150582 (0x563fe05c7582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #63: PyObject_Call + 0xbc (0x563fe05c7f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:. This may indicate a possible application crash on rank 0 or a network set up issue.
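The `Connection reset by peer` raised from `store->get('5:6')` above means the TCP connection to the c10d rendezvous store (hosted by rank 0) dropped while this rank was blocked waiting for the `ncclUniqueId`, which matches the trailing hint about a crash on rank 0. A minimal stdlib sketch of that failure mode (hypothetical illustration, not nanotron or PyTorch code): a client blocked on a TCP read observes a connection reset when the peer process goes away abruptly.

```python
# Hypothetical sketch: a blocked TCP read fails with "Connection reset by
# peer" when the serving process dies abruptly, mirroring what each rank
# saw when the TCPStore on rank 0 disappeared mid-rendezvous.
import socket
import struct
import threading

def abrupt_server(port_holder, ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    # SO_LINGER with a zero timeout makes close() send a TCP RST instead
    # of a clean FIN, simulating a crash of the store-hosting process.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()
    srv.close()

port_holder, ready = [], threading.Event()
t = threading.Thread(target=abrupt_server, args=(port_holder, ready))
t.start()
ready.wait()

cli = socket.create_connection(("127.0.0.1", port_holder[0]))
try:
    cli.recv(1024)  # blocks like store->get('5:6'), then the peer resets
    result = "no error"
except ConnectionResetError:
    result = "connection reset"
finally:
    cli.close()
    t.join()
print(result)
```

In the real run the same reset surfaces on every surviving rank at once, which is why ranks 4, 5, and 6 below all report the identical `DistBackendError` with only the store key (`3:4`, `4:5`, `5:6`) differing.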
[default5]:Traceback (most recent call last):
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default5]: trainer.train(dataloader)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 430, in train
[default5]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 459, in training_step
[default5]: outputs = self.pipeline_engine.train_batch_iter(
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 187, in train_batch_iter
[default5]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default5]: output = model(**micro_batch)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default5]: return self._call_impl(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default5]: return forward_call(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 890, in forward
[default5]: sharded_logits = self.model(
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default5]: return self._call_impl(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default5]: return forward_call(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default5]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default5]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default5]: return self._call_impl(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default5]: return forward_call(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default5]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default5]: pipeline_state.run_communication()
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default5]: recv_activation_tensor = recv_activation()
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default5]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default5]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default5]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 246, in _recv_meta
[default5]: dist.recv(
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
[default5]: return func(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1706, in recv
[default5]: pg.recv([tensor], group_src_rank, tag).wait()
[default5]:torch.distributed.DistBackendError: [5] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '4:5', but store->get('4:5') got error: Connection reset by peer
[default5]:Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:670 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f4465b29d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: <unknown function> + 0x589518e (0x7f449dae318e in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x360 (0x7f449dadd9a0 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7f449daddce2 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7f449dadeb11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f449da93f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f449da93f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f449da93f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #8: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f449da93f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #9: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xa9 (0x7f4466cd1c69 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #10: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x22b (0x7f4466cd8c5b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #11: c10d::ProcessGroupNCCL::recv(std::vector<at::Tensor, std::allocator<at::Tensor> >&, int, int) + 0x550 (0x7f4466cfbb60 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #12: <unknown function> + 0x5838439 (0x7f449da86439 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #13: <unknown function> + 0x5843330 (0x7f449da91330 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #14: <unknown function> + 0x58433c5 (0x7f449da913c5 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #15: <unknown function> + 0x4e893cc (0x7f449d0d73cc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #16: <unknown function> + 0x1a08a88 (0x7f4499c56a88 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #17: <unknown function> + 0x5849a84 (0x7f449da97a84 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #18: <unknown function> + 0x584ed35 (0x7f449da9cd35 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #19: <unknown function> + 0xc97eee (0x7f44b034eeee in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default5]:frame #20: <unknown function> + 0x413ea4 (0x7f44afacaea4 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default5]:frame #21: <unknown function> + 0x1445a6 (0x556b250985a6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #22: _PyObject_MakeTpCall + 0x26b (0x556b25091a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #23: <unknown function> + 0x150866 (0x556b250a4866 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #24: _PyEval_EvalFrameDefault + 0x4c12 (0x556b2508d142 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #25: _PyFunction_Vectorcall + 0x6c (0x556b25098a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #26: PyObject_Call + 0xbc (0x556b250a4f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #27: _PyEval_EvalFrameDefault + 0x2d83 (0x556b2508b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #28: _PyFunction_Vectorcall + 0x6c (0x556b25098a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #29: _PyEval_EvalFrameDefault + 0x13ca (0x556b250898fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #30: <unknown function> + 0x150582 (0x556b250a4582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #31: _PyEval_EvalFrameDefault + 0x13ca (0x556b250898fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #32: <unknown function> + 0x150582 (0x556b250a4582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #33: _PyEval_EvalFrameDefault + 0x13ca (0x556b250898fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #34: <unknown function> + 0x150582 (0x556b250a4582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #35: _PyEval_EvalFrameDefault + 0x13ca (0x556b250898fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #36: _PyObject_FastCallDictTstate + 0xd0 (0x556b25090f50 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #37: _PyObject_Call_Prepend + 0x69 (0x556b250a2c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #38: <unknown function> + 0x211239 (0x556b25165239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #39: _PyObject_MakeTpCall + 0x26b (0x556b25091a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #40: _PyEval_EvalFrameDefault + 0x4eb6 (0x556b2508d3e6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #41: _PyFunction_Vectorcall + 0x6c (0x556b25098a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #42: _PyEval_EvalFrameDefault + 0x72c (0x556b25088c5c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #43: _PyFunction_Vectorcall + 0x6c (0x556b25098a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #44: _PyEval_EvalFrameDefault + 0x13ca (0x556b250898fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #45: <unknown function> + 0x150582 (0x556b250a4582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #46: PyObject_Call + 0xbc (0x556b250a4f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #47: _PyEval_EvalFrameDefault + 0x2d83 (0x556b2508b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #48: <unknown function> + 0x150582 (0x556b250a4582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #49: PyObject_Call + 0xbc (0x556b250a4f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #50: _PyEval_EvalFrameDefault + 0x2d83 (0x556b2508b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #51: _PyFunction_Vectorcall + 0x6c (0x556b25098a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #52: _PyObject_FastCallDictTstate + 0x187 (0x556b25091007 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #53: _PyObject_Call_Prepend + 0x69 (0x556b250a2c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #54: <unknown function> + 0x211239 (0x556b25165239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #55: PyObject_Call + 0x207 (0x556b250a5067 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #56: _PyEval_EvalFrameDefault + 0x2d83 (0x556b2508b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #57: <unknown function> + 0x150582 (0x556b250a4582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #58: _PyEval_EvalFrameDefault + 0x13ca (0x556b250898fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #59: <unknown function> + 0x150582 (0x556b250a4582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #60: PyObject_Call + 0xbc (0x556b250a4f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #61: _PyEval_EvalFrameDefault + 0x2d83 (0x556b2508b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #62: <unknown function> + 0x150582 (0x556b250a4582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #63: PyObject_Call + 0xbc (0x556b250a4f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:. This may indicate a possible application crash on rank 0 or a network set up issue.
[default4]:Traceback (most recent call last):
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default4]: trainer.train(dataloader)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 430, in train
[default4]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 459, in training_step
[default4]: outputs = self.pipeline_engine.train_batch_iter(
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 187, in train_batch_iter
[default4]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default4]: output = model(**micro_batch)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default4]: return self._call_impl(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default4]: return forward_call(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 890, in forward
[default4]: sharded_logits = self.model(
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default4]: return self._call_impl(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default4]: return forward_call(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default4]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default4]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default4]: return self._call_impl(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default4]: return forward_call(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default4]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default4]: pipeline_state.run_communication()
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default4]: recv_activation_tensor = recv_activation()
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default4]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default4]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default4]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 246, in _recv_meta
[default4]: dist.recv(
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
[default4]: return func(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1706, in recv
[default4]: pg.recv([tensor], group_src_rank, tag).wait()
[default4]:torch.distributed.DistBackendError: [4] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '3:4', but store->get('3:4') got error: Connection reset by peer
[default4]:Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:670 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f2137015d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: <unknown function> + 0x589518e (0x7f216efcf18e in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x360 (0x7f216efc99a0 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7f216efc9ce2 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7f216efcab11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f216ef7ff81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f216ef7ff81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f216ef7ff81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #8: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f216ef7ff81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #9: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xa9 (0x7f21381bdc69 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #10: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x22b (0x7f21381c4c5b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #11: c10d::ProcessGroupNCCL::recv(std::vector<at::Tensor, std::allocator<at::Tensor> >&, int, int) + 0x550 (0x7f21381e7b60 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #12: <unknown function> + 0x5838439 (0x7f216ef72439 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #13: <unknown function> + 0x5843330 (0x7f216ef7d330 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #14: <unknown function> + 0x58433c5 (0x7f216ef7d3c5 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #15: <unknown function> + 0x4e893cc (0x7f216e5c33cc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #16: <unknown function> + 0x1a08a88 (0x7f216b142a88 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #17: <unknown function> + 0x5849a84 (0x7f216ef83a84 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #18: <unknown function> + 0x584ed35 (0x7f216ef88d35 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #19: <unknown function> + 0xc97eee (0x7f218183aeee in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default4]:frame #20: <unknown function> + 0x413ea4 (0x7f2180fb6ea4 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default4]:frame #21: <unknown function> + 0x1445a6 (0x55b327da15a6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #22: _PyObject_MakeTpCall + 0x26b (0x55b327d9aa6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #23: <unknown function> + 0x150866 (0x55b327dad866 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #24: _PyEval_EvalFrameDefault + 0x4c12 (0x55b327d96142 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #25: _PyFunction_Vectorcall + 0x6c (0x55b327da1a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #26: PyObject_Call + 0xbc (0x55b327dadf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #27: _PyEval_EvalFrameDefault + 0x2d83 (0x55b327d942b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #28: _PyFunction_Vectorcall + 0x6c (0x55b327da1a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #29: _PyEval_EvalFrameDefault + 0x13ca (0x55b327d928fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #30: <unknown function> + 0x150582 (0x55b327dad582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #31: _PyEval_EvalFrameDefault + 0x13ca (0x55b327d928fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #32: <unknown function> + 0x150582 (0x55b327dad582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #33: _PyEval_EvalFrameDefault + 0x13ca (0x55b327d928fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #34: <unknown function> + 0x150582 (0x55b327dad582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #35: _PyEval_EvalFrameDefault + 0x13ca (0x55b327d928fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #36: _PyObject_FastCallDictTstate + 0xd0 (0x55b327d99f50 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #37: _PyObject_Call_Prepend + 0x69 (0x55b327dabc39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #38: <unknown function> + 0x211239 (0x55b327e6e239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #39: _PyObject_MakeTpCall + 0x26b (0x55b327d9aa6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #40: _PyEval_EvalFrameDefault + 0x4eb6 (0x55b327d963e6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #41: _PyFunction_Vectorcall + 0x6c (0x55b327da1a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #42: _PyEval_EvalFrameDefault + 0x72c (0x55b327d91c5c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #43: _PyFunction_Vectorcall + 0x6c (0x55b327da1a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #44: _PyEval_EvalFrameDefault + 0x13ca (0x55b327d928fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #45: <unknown function> + 0x150582 (0x55b327dad582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #46: PyObject_Call + 0xbc (0x55b327dadf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #47: _PyEval_EvalFrameDefault + 0x2d83 (0x55b327d942b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #48: <unknown function> + 0x150582 (0x55b327dad582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #49: PyObject_Call + 0xbc (0x55b327dadf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #50: _PyEval_EvalFrameDefault + 0x2d83 (0x55b327d942b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #51: _PyFunction_Vectorcall + 0x6c (0x55b327da1a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #52: _PyObject_FastCallDictTstate + 0x187 (0x55b327d9a007 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #53: _PyObject_Call_Prepend + 0x69 (0x55b327dabc39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #54: <unknown function> + 0x211239 (0x55b327e6e239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #55: PyObject_Call + 0x207 (0x55b327dae067 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #56: _PyEval_EvalFrameDefault + 0x2d83 (0x55b327d942b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #57: <unknown function> + 0x150582 (0x55b327dad582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #58: _PyEval_EvalFrameDefault + 0x13ca (0x55b327d928fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #59: <unknown function> + 0x150582 (0x55b327dad582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #60: PyObject_Call + 0xbc (0x55b327dadf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #61: _PyEval_EvalFrameDefault + 0x2d83 (0x55b327d942b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #62: <unknown function> + 0x150582 (0x55b327dad582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #63: PyObject_Call + 0xbc (0x55b327dadf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:. This may indicate a possible application crash on rank 0 or a network set up issue.
[default7]:Traceback (most recent call last):
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default7]: trainer.train(dataloader)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 430, in train
[default7]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 459, in training_step
[default7]: outputs = self.pipeline_engine.train_batch_iter(
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 187, in train_batch_iter
[default7]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default7]: output = model(**micro_batch)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default7]: return self._call_impl(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default7]: return forward_call(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 890, in forward
[default7]: sharded_logits = self.model(
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default7]: return self._call_impl(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default7]: return forward_call(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default7]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default7]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default7]: return self._call_impl(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default7]: return forward_call(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default7]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default7]: pipeline_state.run_communication()
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default7]: recv_activation_tensor = recv_activation()
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default7]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default7]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default7]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 246, in _recv_meta
[default7]: dist.recv(
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
[default7]: return func(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1706, in recv
[default7]: pg.recv([tensor], group_src_rank, tag).wait()
[default7]:torch.distributed.DistBackendError: [7] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '6:7', but store->get('6:7') got error: Connection reset by peer
[default7]:Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:670 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f500d69bd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: <unknown function> + 0x589518e (0x7f504565518e in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x360 (0x7f504564f9a0 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7f504564fce2 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7f5045650b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f5045605f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f5045605f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f5045605f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #8: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f5045605f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #9: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xa9 (0x7f500e843c69 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #10: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x22b (0x7f500e84ac5b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #11: c10d::ProcessGroupNCCL::recv(std::vector<at::Tensor, std::allocator<at::Tensor> >&, int, int) + 0x550 (0x7f500e86db60 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #12: <unknown function> + 0x5838439 (0x7f50455f8439 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #13: <unknown function> + 0x5843330 (0x7f5045603330 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #14: <unknown function> + 0x58433c5 (0x7f50456033c5 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #15: <unknown function> + 0x4e893cc (0x7f5044c493cc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #16: <unknown function> + 0x1a08a88 (0x7f50417c8a88 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #17: <unknown function> + 0x5849a84 (0x7f5045609a84 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #18: <unknown function> + 0x584ed35 (0x7f504560ed35 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #19: <unknown function> + 0xc97eee (0x7f5057ec0eee in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default7]:frame #20: <unknown function> + 0x413ea4 (0x7f505763cea4 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default7]:frame #21: <unknown function> + 0x1445a6 (0x558474c7e5a6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #22: _PyObject_MakeTpCall + 0x26b (0x558474c77a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #23: <unknown function> + 0x150866 (0x558474c8a866 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #24: _PyEval_EvalFrameDefault + 0x4c12 (0x558474c73142 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #25: _PyFunction_Vectorcall + 0x6c (0x558474c7ea2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #26: PyObject_Call + 0xbc (0x558474c8af1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #27: _PyEval_EvalFrameDefault + 0x2d83 (0x558474c712b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #28: _PyFunction_Vectorcall + 0x6c (0x558474c7ea2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #29: _PyEval_EvalFrameDefault + 0x13ca (0x558474c6f8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #30: <unknown function> + 0x150582 (0x558474c8a582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #31: _PyEval_EvalFrameDefault + 0x13ca (0x558474c6f8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #32: <unknown function> + 0x150582 (0x558474c8a582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #33: _PyEval_EvalFrameDefault + 0x13ca (0x558474c6f8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #34: <unknown function> + 0x150582 (0x558474c8a582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #35: _PyEval_EvalFrameDefault + 0x13ca (0x558474c6f8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #36: _PyObject_FastCallDictTstate + 0xd0 (0x558474c76f50 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #37: _PyObject_Call_Prepend + 0x69 (0x558474c88c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #38: <unknown function> + 0x211239 (0x558474d4b239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #39: _PyObject_MakeTpCall + 0x26b (0x558474c77a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #40: _PyEval_EvalFrameDefault + 0x4eb6 (0x558474c733e6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #41: _PyFunction_Vectorcall + 0x6c (0x558474c7ea2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #42: _PyEval_EvalFrameDefault + 0x72c (0x558474c6ec5c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #43: _PyFunction_Vectorcall + 0x6c (0x558474c7ea2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #44: _PyEval_EvalFrameDefault + 0x13ca (0x558474c6f8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #45: <unknown function> + 0x150582 (0x558474c8a582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #46: PyObject_Call + 0xbc (0x558474c8af1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #47: _PyEval_EvalFrameDefault + 0x2d83 (0x558474c712b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #48: <unknown function> + 0x150582 (0x558474c8a582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #49: PyObject_Call + 0xbc (0x558474c8af1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #50: _PyEval_EvalFrameDefault + 0x2d83 (0x558474c712b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #51: _PyFunction_Vectorcall + 0x6c (0x558474c7ea2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #52: _PyObject_FastCallDictTstate + 0x187 (0x558474c77007 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #53: _PyObject_Call_Prepend + 0x69 (0x558474c88c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #54: <unknown function> + 0x211239 (0x558474d4b239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #55: PyObject_Call + 0x207 (0x558474c8b067 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #56: _PyEval_EvalFrameDefault + 0x2d83 (0x558474c712b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #57: <unknown function> + 0x150582 (0x558474c8a582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #58: _PyEval_EvalFrameDefault + 0x13ca (0x558474c6f8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #59: <unknown function> + 0x150582 (0x558474c8a582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #60: PyObject_Call + 0xbc (0x558474c8af1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #61: _PyEval_EvalFrameDefault + 0x2d83 (0x558474c712b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #62: <unknown function> + 0x150582 (0x558474c8a582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #63: PyObject_Call + 0xbc (0x558474c8af1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:. This may indicate a possible application crash on rank 0 or a network set up issue.
[2024-07-06 09:36:42,262] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 628806 closing signal SIGTERM
[2024-07-06 09:36:42,263] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 628807 closing signal SIGTERM
[2024-07-06 09:36:42,263] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 628808 closing signal SIGTERM
[2024-07-06 09:36:42,264] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 288358 closing signal SIGTERM
[2024-07-06 09:36:42,264] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 288359 closing signal SIGTERM
[2024-07-06 09:36:42,265] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 288360 closing signal SIGTERM
[2024-07-06 09:36:42,265] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 288361 closing signal SIGTERM
[2024-07-06 09:36:42,266] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 288362 closing signal SIGTERM
[2024-07-06 09:36:42,266] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 288363 closing signal SIGTERM
[2024-07-06 09:36:42,267] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 288364 closing signal SIGTERM
[2024-07-06 09:36:43,799] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 628805) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2024-07-06_09:36:42
host : ip-26-0-160-192.ec2.internal
rank : 4 (local_rank: 4)
exitcode : 1 (pid: 628809)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2024-07-06_09:36:42
host : ip-26-0-160-192.ec2.internal
rank : 5 (local_rank: 5)
exitcode : 1 (pid: 628810)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2024-07-06_09:36:42
host : ip-26-0-160-192.ec2.internal
rank : 6 (local_rank: 6)
exitcode : 1 (pid: 628811)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
time : 2024-07-06_09:36:42
host : ip-26-0-160-192.ec2.internal
rank : 7 (local_rank: 7)
exitcode : 1 (pid: 628812)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:36:42
host : ip-26-0-160-192.ec2.internal
rank : 0 (local_rank: 0)
exitcode : -6 (pid: 628805)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 628805
============================================================
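The failures above all trace back to the same pattern: a rank blocking on `store->get('3:4')` to fetch a `ncclUniqueId` published by its pipeline neighbor, and getting "Connection reset by peer" because the publishing side died first (rank 0 aborted with SIGABRT). The sketch below is a minimal, in-process illustration of that rendezvous pattern; it is **not** the real c10d `TCPStore` implementation, and the class and function names (`KeyValueStore`, `exchange_unique_id`) are invented for illustration only.

```python
# Illustrative sketch of the c10d-style key-value rendezvous described in the
# log ("retrieving ncclUniqueId from [0] via c10d key-value store by key '3:4'").
# The sending rank publishes a unique id under the key "<src>:<dst>"; the
# receiving rank blocks on get() until the key appears, the store times out,
# or the connection is torn down (the failure mode seen above).
import threading
import time


class KeyValueStore:
    """Minimal in-process stand-in for a rendezvous key-value store."""

    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()
        self._closed = False

    def set(self, key, value):
        with self._cond:
            self._data[key] = value
            self._cond.notify_all()

    def close(self):
        # Simulates the remote peer exiting prematurely, as rank 0 did here.
        with self._cond:
            self._closed = True
            self._cond.notify_all()

    def get(self, key, timeout=5.0):
        deadline = time.monotonic() + timeout
        with self._cond:
            while key not in self._data:
                if self._closed:
                    raise ConnectionResetError("Connection reset by peer")
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    raise TimeoutError(f"store->get({key!r}) timed out")
                self._cond.wait(remaining)
            return self._data[key]


def exchange_unique_id(store, src, dst, unique_id=None):
    """Publish (if unique_id is given) or retrieve the id under 'src:dst'."""
    key = f"{src}:{dst}"
    if unique_id is not None:
        store.set(key, unique_id)
        return unique_id
    return store.get(key)


if __name__ == "__main__":
    # Happy path: rank 3 publishes, rank 4 retrieves.
    store = KeyValueStore()
    exchange_unique_id(store, 3, 4, unique_id=b"nccl-unique-id")
    print(exchange_unique_id(store, 3, 4))

    # Failure path matching the log: the store connection drops before the
    # key is ever set, so the waiting rank sees ECONNRESET.
    dead_store = KeyValueStore()
    dead_store.close()
    try:
        exchange_unique_id(dead_store, 6, 7)
    except ConnectionResetError as e:
        print(f"store->get('6:7') got error: {e}")
```

Under this model, the root cause in the summary (rank 0, exitcode -6/SIGABRT) explains the cascade: once the process backing the store side of the exchange aborts, every rank still waiting in `get()` fails with the reset error rather than a clean shutdown.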
[default0]:[rank56]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default0]:[rank56]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default0]:[rank56]:[E ProcessGroupNCCL.cpp:1182] [Rank 56] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-173-246.ec2.internal<34800>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7feb0432bd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7feb054d2fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7feb054d327b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7feb054d6c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7feb054d7839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7feb4f1dbe95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7feb542e3609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7feb540ae353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:terminate called after throwing an instance of 'c10::DistBackendError'
[default0]: what(): [Rank 56] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-173-246.ec2.internal<34800>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7feb0432bd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7feb054d2fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7feb054d327b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7feb054d6c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7feb054d7839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7feb4f1dbe95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7feb542e3609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7feb540ae353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7feb0432bd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: <unknown function> + 0xdf6b11 (0x7feb0522db11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7feb4f1dbe95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #3: <unknown function> + 0x8609 (0x7feb542e3609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #4: clone + 0x43 (0x7feb540ae353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
srun: error: ip-26-0-160-192: task 0: Exited with exit code 1
[2024-07-06 09:36:44,193] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 288357) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
[2024-07-06 09:36:44,233] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-173-246.ec2.internal_288286_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:36:42
host : ip-26-0-173-246.ec2.internal
rank : 48 (local_rank: 0)
exitcode : -6 (pid: 288357)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 288357
============================================================
srun: error: ip-26-0-173-246: task 6: Exited with exit code 1
[2024-07-06 09:36:46,244] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-174-36.ec2.internal_1709888_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
[2024-07-06 09:36:47,150] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-165-59.ec2.internal_119760_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
[2024-07-06 09:36:47,259] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 119830 closing signal SIGTERM
[2024-07-06 09:36:47,259] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 119831 closing signal SIGTERM
[2024-07-06 09:36:47,259] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 119832 closing signal SIGTERM
[2024-07-06 09:36:47,260] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 119833 closing signal SIGTERM
[2024-07-06 09:36:47,260] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 119834 closing signal SIGTERM
[2024-07-06 09:36:47,260] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 119835 closing signal SIGTERM
[2024-07-06 09:36:47,261] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 119836 closing signal SIGTERM
[2024-07-06 09:36:47,261] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 119837 closing signal SIGTERM
[2024-07-06 09:36:47,271] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1709959 closing signal SIGTERM
[2024-07-06 09:36:47,271] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1709960 closing signal SIGTERM
[2024-07-06 09:36:47,271] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1709961 closing signal SIGTERM
[2024-07-06 09:36:47,272] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1709962 closing signal SIGTERM
[2024-07-06 09:36:47,272] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1709963 closing signal SIGTERM
[2024-07-06 09:36:47,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1709964 closing signal SIGTERM
[2024-07-06 09:36:47,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1709965 closing signal SIGTERM
[2024-07-06 09:36:49,100] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 1709958) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
[2024-07-06 09:36:49,134] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-174-36.ec2.internal_1709888_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:36:47
host : ip-26-0-174-36.ec2.internal
rank : 56 (local_rank: 0)
exitcode : -6 (pid: 1709958)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 1709958
============================================================
srun: error: ip-26-0-174-36: task 7: Exited with exit code 1
[2024-07-06 09:36:49,611] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-165-59.ec2.internal_119760_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
result = agent.run()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
result = f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 727, in run
result = self._invoke_run(role)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 900, in _invoke_run
num_nodes_waiting = rdzv_handler.num_nodes_waiting()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1083, in num_nodes_waiting
self._state_holder.sync()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 409, in sync
get_response = self._backend.get_state()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
base64_state: bytes = self._call_store("get", self._key)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
srun: error: ip-26-0-165-59: task 4: Exited with exit code 1
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.
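The hint above refers to the optional Rust-based upload backend for `huggingface_hub`. A minimal sketch of enabling it (assuming the package was installed with the `hf_transfer` extra, e.g. `pip install -U "huggingface_hub[hf_transfer]"`):

```shell
# Opt in to the hf_transfer upload/download backend via its environment
# variable, then confirm the setting is visible to child processes.
export HF_HUB_ENABLE_HF_TRANSFER=1
echo "hf_transfer enabled: $HF_HUB_ENABLE_HF_TRANSFER"
```

Note the limitations mentioned in the linked docs (e.g. no proxy support and less granular progress reporting), so this is worth enabling only when raw bandwidth is the bottleneck.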