|
======================== |
|
START TIME: Tue Jul 2 16:30:59 UTC 2024 |
|
python3 version = Python 3.10.14 |
|
======================== |
|
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` to this function directly, or `--add-to-git-credential` when using `huggingface-cli`, if you also want to set the git credential.
|
Token is valid (permission: write). |
|
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token |
|
Login successful |
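
For reference, the credential warning above can be avoided by asking the hub client to also store the token in the git credential helper. A minimal sketch using the public huggingface_hub API (the token value is a placeholder):

    from huggingface_hub import login

    # Logs in and, unlike the run above, also writes the token to the git credential helper.
    login(token="hf_xxx", add_to_git_credential=True)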
|
Already on 'bench_cluster' |
|
M examples/config_tiny_llama.py |
|
M examples/config_tiny_llama.yaml |
|
M examples/train_tiny_llama.sh |
|
M src/nanotron/models/llama.py |
|
M src/nanotron/trainer.py |
|
Your branch is up to date with 'origin/bench_cluster'. |
|
Job status: RUNNING |
|
W0702 16:31:01.518000 140017403332416 torch/distributed/run.py:757] |
|
W0702 16:31:01.518000 140017403332416 torch/distributed/run.py:757] ***************************************** |
|
W0702 16:31:01.518000 140017403332416 torch/distributed/run.py:757] Setting the OMP_NUM_THREADS environment variable to 1 for each process by default, to avoid overloading your system; please tune the variable further for optimal performance in your application as needed.
|
W0702 16:31:01.518000 140017403332416 torch/distributed/run.py:757] ***************************************** |
|
W0702 16:31:01.574000 139883342743360 torch/distributed/run.py:757] |
|
W0702 16:31:01.574000 139883342743360 torch/distributed/run.py:757] ***************************************** |
|
W0702 16:31:01.574000 139883342743360 torch/distributed/run.py:757] Setting the OMP_NUM_THREADS environment variable to 1 for each process by default, to avoid overloading your system; please tune the variable further for optimal performance in your application as needed.
|
W0702 16:31:01.574000 139883342743360 torch/distributed/run.py:757] ***************************************** |
|
[default0]:07/02/2024 16:31:20 [WARNING|DP=0|PP=0|TP=0|ip-26-0-169-239]: [Vocab Size Padding] Padded vocab (size: 50257) with 3 dummy tokens (new size: 50260) |
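
The padding above follows from tensor parallelism: with tp=4 the embedding rows are sharded across 4 ranks, so the vocabulary is rounded up to a multiple of 4 (50257 -> 50260, i.e. 3 dummy tokens). A rough sketch of that rounding, assuming the rule is simply "next multiple of make_vocab_size_divisible_by * tp":

    import math

    def padded_vocab_size(vocab_size: int, tp: int, divisible_by: int = 1) -> int:
        # Assumed rule: round up so every TP rank gets an equal shard of the embedding rows.
        multiple = divisible_by * tp
        return math.ceil(vocab_size / multiple) * multiple

    print(padded_vocab_size(50257, tp=4))  # 50260 -> 3 dummy tokens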
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Config: |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Config(general=GeneralArgs(project='bench_cluster', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: run='%date_%jobid', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: seed=42, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: step=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: consumed_train_samples=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: benchmark_csv_path=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: ignore_sanity_checks=True), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: parallelism=ParallelismArgs(dp=4, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: pp=1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: tp=4, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7fb5af828910>, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: tp_linear_async_communication=False, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: expert_parallel_size=1), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: eos_token_id=2, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: hidden_act='silu', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: hidden_size=2048, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: initializer_range=0.02, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: intermediate_size=4096, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: is_llama_config=True, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: max_position_embeddings=4096, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: num_attention_heads=32, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: num_hidden_layers=24, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: num_key_value_heads=32, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: pad_token_id=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: pretraining_tp=1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: rms_norm_eps=1e-05, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: rope_scaling=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: rope_theta=10000.0, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: tie_word_embeddings=True, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: use_cache=True, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: vocab_size=50260), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: init_method=RandomInit(std=0.025), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: dtype=torch.bfloat16, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: make_vocab_size_divisible_by=1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: ddp_bucket_cap_mb=25), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: tokenizer_revision=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: tokenizer_max_length=None), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: checkpoint_interval=100000, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: save_initial_state=False, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: resume_checkpoint_path=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: checkpoints_path_is_shared_file_system=False), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: logging=LoggingArgs(log_level='info', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: log_level_replica='info', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: iteration_step_info_interval=1), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: tokens=TokensArgs(sequence_length=4096, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: train_steps=20, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: micro_batch_size=64, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: batch_accumulation_per_replica=4, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: val_check_interval=-1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: limit_val_batches=0, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: limit_test_batches=0), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: adam_beta1=0.9, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: adam_beta2=0.95, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: torch_adam_is_fused=True, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: name='adamW'), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: zero_stage=1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: weight_decay=0.01, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: clip_grad=1.0, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: accumulate_grad_in_fp32=True, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: lr_warmup_steps=1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: lr_warmup_style='linear', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: lr_decay_style='linear', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: lr_decay_steps=19, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: lr_decay_starting_step=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: min_decay_lr=1e-05)), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: data_stages=[DatasetStageArgs(name='Training Stage', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: start_training_step=1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: hf_dataset_splits='train', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: hf_dataset_config_name=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: dataset_processing_num_proc_per_process=64, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: dataset_overwrite_cache=False, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: text_column_name='text'), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: seed=42, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: num_loading_workers=32))], |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/16_GPUS/dp-4_tp-4_pp-1_mbz-64')), |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: lighteval=None) |
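
The parallelism block above maps onto the 16 GPUs of this job: dp x tp x pp = 4 x 4 x 1 = 16 processes, spread over the two 8-GPU nodes visible in the log (ip-26-0-169-239 and ip-26-0-169-247). A quick sanity check of the world size and the effective batch implied by this config:

    dp, tp, pp = 4, 4, 1
    micro_batch_size, grad_accum, seq_len = 64, 4, 4096

    world_size = dp * tp * pp                               # 16 GPUs
    global_batch_size = dp * micro_batch_size * grad_accum  # 1024 sequences per step
    tokens_per_step = global_batch_size * seq_len           # 4,194,304 tokens per step

    print(world_size, global_batch_size, tokens_per_step)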
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Model Config: |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: LlamaConfig(bos_token_id=1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: eos_token_id=2, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: hidden_act='silu', |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: hidden_size=2048, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: initializer_range=0.02, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: intermediate_size=4096, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: is_llama_config=True, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: max_position_embeddings=4096, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: num_attention_heads=32, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: num_hidden_layers=24, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: num_key_value_heads=32, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: pad_token_id=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: pretraining_tp=1, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: rms_norm_eps=1e-05, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: rope_scaling=None, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: rope_theta=10000.0, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: tie_word_embeddings=True, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: use_cache=True, |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: vocab_size=50260) |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Building model.. |
|
[default0]:07/02/2024 16:31:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Setting PP block ranks... |
|
[default0]:07/02/2024 16:31:33 [INFO|DP=2|PP=0|TP=0|ip-26-0-169-247]: No checkpoint path provided. |
|
[default1]:07/02/2024 16:31:33 [INFO|DP=2|PP=0|TP=1|ip-26-0-169-247]: No checkpoint path provided. |
|
[default3]:07/02/2024 16:31:33 [INFO|DP=2|PP=0|TP=3|ip-26-0-169-247]: No checkpoint path provided. |
|
[default2]:07/02/2024 16:31:33 [INFO|DP=2|PP=0|TP=2|ip-26-0-169-247]: No checkpoint path provided. |
|
[default2]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=2|ip-26-0-169-239]: Local number of parameters: 277M (529.27MiB) |
|
[default2]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=2|ip-26-0-169-239]: [After model building] Memory usage: 554.21MiB. Peak allocated: 606.24MiB Peak reserved: 608.00MiB |
|
[default2]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=2|ip-26-0-169-239]: No checkpoint path provided. |
|
[default3]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=3|ip-26-0-169-239]: Local number of parameters: 277M (529.27MiB) |
|
[default3]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=3|ip-26-0-169-239]: [After model building] Memory usage: 554.21MiB. Peak allocated: 606.24MiB Peak reserved: 608.00MiB |
|
[default3]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=3|ip-26-0-169-239]: No checkpoint path provided. |
|
[default1]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=1|ip-26-0-169-239]: Local number of parameters: 277M (529.27MiB) |
|
[default1]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=1|ip-26-0-169-239]: [After model building] Memory usage: 554.21MiB. Peak allocated: 606.24MiB Peak reserved: 608.00MiB |
|
[default1]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=1|ip-26-0-169-239]: No checkpoint path provided. |
|
[default0]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Total number of parameters: 1.11G (2117.09MiB) |
|
[default0]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Local number of parameters: 277M (529.27MiB) |
|
[default0]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [After model building] Memory usage: 554.21MiB. Peak allocated: 606.24MiB Peak reserved: 608.00MiB |
|
[default0]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: No checkpoint path provided. |
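
The 1.11G total reported above is consistent with a direct parameter count for this config; a rough sketch, assuming a standard LLaMA layout (tied embeddings, gated SiLU MLP, full multi-head attention, two RMSNorms per block):

    h, layers, inter, vocab = 2048, 24, 4096, 50260

    embed = vocab * h                       # input embedding, tied with the LM head
    attn  = 4 * h * h                       # q, k, v, o projections
    mlp   = 3 * h * inter                   # gate, up, down projections
    norms = 2 * h                           # two RMSNorms per block

    total = embed + layers * (attn + mlp + norms) + h   # + final RMSNorm
    print(f"{total/1e9:.2f}G params")                   # ~1.11G
    print(f"{total*2/2**20:.0f} MiB in bf16")           # ~2117 MiB; ~529 MiB per TP=4 rank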
|
[default0]:07/02/2024 16:31:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Parametrizing model parameters using StandardParametrizator |
|
[default6]:07/02/2024 16:31:33 [INFO|DP=1|PP=0|TP=2|ip-26-0-169-239]: No checkpoint path provided. |
|
[default4]:07/02/2024 16:31:33 [INFO|DP=1|PP=0|TP=0|ip-26-0-169-239]: No checkpoint path provided. |
|
[default5]:07/02/2024 16:31:33 [INFO|DP=1|PP=0|TP=1|ip-26-0-169-239]: No checkpoint path provided. |
|
[default6]:07/02/2024 16:31:33 [INFO|DP=3|PP=0|TP=2|ip-26-0-169-247]: No checkpoint path provided. |
|
[default7]:07/02/2024 16:31:33 [INFO|DP=1|PP=0|TP=3|ip-26-0-169-239]: No checkpoint path provided. |
|
[default5]:07/02/2024 16:31:33 [INFO|DP=3|PP=0|TP=1|ip-26-0-169-247]: No checkpoint path provided. |
|
[default7]:07/02/2024 16:31:33 [INFO|DP=3|PP=0|TP=3|ip-26-0-169-247]: No checkpoint path provided. |
|
[default4]:07/02/2024 16:31:33 [INFO|DP=3|PP=0|TP=0|ip-26-0-169-247]: No checkpoint path provided. |
|
[default0]:07/02/2024 16:31:35 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [Optimizer Building] Using LearningRateForSP as learning rate |
|
[default0]:07/02/2024 16:31:35 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [ZeRO sharding] Size of optimizer params per rank: |
|
[default0]:07/02/2024 16:31:35 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [ZeRO sharding] DP Rank 0 has 69.4M out of 277M (25.00%) params' optimizer states |
|
[default0]:07/02/2024 16:31:35 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [ZeRO sharding] DP Rank 1 has 69.4M out of 277M (25.00%) params' optimizer states |
|
[default0]:07/02/2024 16:31:35 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [ZeRO sharding] DP Rank 2 has 69.4M out of 277M (25.00%) params' optimizer states |
|
[default0]:07/02/2024 16:31:35 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [ZeRO sharding] DP Rank 3 has 69.4M out of 277M (25.00%) params' optimizer states |
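
With zero_stage=1 the optimizer states for the 277M locally-held parameters are partitioned across the dp=4 replicas, which is why each DP rank above owns states for 69.4M parameters (25%). A rough per-rank estimate, assuming an fp32 master copy plus two fp32 Adam moments per sharded parameter:

    shard_params = 277.4e6 / 4       # ~69.4M params per DP rank
    bytes_per_param = 4 + 4 + 4      # fp32 master + exp_avg + exp_avg_sq
    print(f"~{shard_params * bytes_per_param / 2**20:.0f} MiB optimizer state per rank")  # ~794 MiB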
|
[default0]:07/02/2024 16:31:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples |
|
[default0]:07/02/2024 16:31:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Using `datasets` library |
|
[default0]:07/02/2024 16:31:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4') |
|
[default0]:07/02/2024 16:31:37 [WARNING|DP=0|PP=0|TP=0|ip-26-0-169-239]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:07/02/2024 16:31:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [Training Plan] There are 1 training stages |
|
[default0]:07/02/2024 16:31:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [Stage Training Stage] start from step 1 |
|
[default0]:07/02/2024 16:31:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: |
|
[default0]:07/02/2024 16:31:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: [Start training] datetime: 2024-07-02 16:31:38.751929 | mbs: 64 | grad_accum: 4 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0 |
|
[default0]:07/02/2024 16:31:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Resuming training from stage Training Stage; it has trained for 0 samples and has 19 remaining train steps
|
[default0]:07/02/2024 16:31:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-239]: Memory usage: 1877.40MiB. Peak allocated 1877.40MiB. Peak reserved: 1934.00MiB |
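
The jump from ~554 MiB after model building to ~1877 MiB at training start is consistent with the extra buffers this config requests; one plausible decomposition, assuming an fp32 gradient-accumulation buffer over all 277M local parameters plus the ZeRO-1 fp32 parameter shard (the Adam moments are typically allocated later, at the first optimizer step):

    after_build = 554.21               # MiB, from the log line above
    fp32_grads  = 277.4e6 * 4 / 2**20  # accumulate_grad_in_fp32 over local params (~1058 MiB)
    fp32_shard  = 69.4e6 * 4 / 2**20   # ZeRO-1 fp32 param shard (~265 MiB)

    print(f"~{after_build + fp32_grads + fp32_shard:.0f} MiB")  # ~1877 MiB, matching the log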
|
[default3]:07/02/2024 16:31:38 [WARNING|DP=0|PP=0|TP=3|ip-26-0-169-239]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default2]:07/02/2024 16:31:38 [WARNING|DP=0|PP=0|TP=2|ip-26-0-169-239]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default2]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default4]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default1]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default1]:07/02/2024 16:31:38 [WARNING|DP=0|PP=0|TP=1|ip-26-0-169-239]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default3]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default5]:07/02/2024 16:31:38 [WARNING|DP=1|PP=0|TP=1|ip-26-0-169-239]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default4]:07/02/2024 16:31:38 [WARNING|DP=1|PP=0|TP=0|ip-26-0-169-239]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default6]:07/02/2024 16:31:38 [WARNING|DP=1|PP=0|TP=2|ip-26-0-169-239]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default2]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:07/02/2024 16:31:38 [WARNING|DP=2|PP=0|TP=0|ip-26-0-169-247]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default6]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default6]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default5]:07/02/2024 16:31:38 [WARNING|DP=3|PP=0|TP=1|ip-26-0-169-247]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default6]:07/02/2024 16:31:38 [WARNING|DP=3|PP=0|TP=2|ip-26-0-169-247]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default1]:07/02/2024 16:31:38 [WARNING|DP=2|PP=0|TP=1|ip-26-0-169-247]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default5]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default3]:07/02/2024 16:31:38 [WARNING|DP=2|PP=0|TP=3|ip-26-0-169-247]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default4]:07/02/2024 16:31:38 [WARNING|DP=3|PP=0|TP=0|ip-26-0-169-247]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default2]:07/02/2024 16:31:38 [WARNING|DP=2|PP=0|TP=2|ip-26-0-169-247]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default1]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default3]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default5]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default4]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default7]:07/02/2024 16:31:39 [WARNING|DP=1|PP=0|TP=3|ip-26-0-169-239]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default7]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default7]:07/02/2024 16:31:39 [WARNING|DP=3|PP=0|TP=3|ip-26-0-169-247]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default7]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default2]:[rank10]: OSError: [Errno 122] Disk quota exceeded |
|
[default2]: |
|
[default2]:[rank10]: During handling of the above exception, another exception occurred: |
|
[default2]: |
|
[default2]:[rank10]: Traceback (most recent call last): |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default2]:[rank10]: trainer.train(dataloader) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default2]:[rank10]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default2]:[rank10]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default2]:[rank10]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default2]:[rank10]: output = model(**micro_batch) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank10]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank10]: return forward_call(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default2]:[rank10]: sharded_logits = self.model( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank10]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank10]: return forward_call(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default2]:[rank10]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default2]:[rank10]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank10]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank10]: return forward_call(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default2]:[rank10]: output = self.pp_block(**new_kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank10]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank10]: return forward_call(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 629, in forward |
|
[default2]:[rank10]: hidden_states = self.input_layernorm(hidden_states) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank10]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank10]: return forward_call(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/nn/layer_norm.py", line 42, in forward |
|
[default2]:[rank10]: return layer_norm_fn( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 875, in layer_norm_fn |
|
[default2]:[rank10]: return LayerNormFn.apply( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply |
|
[default2]:[rank10]: return super().apply(*args, **kwargs) # type: ignore[misc] |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 748, in forward |
|
[default2]:[rank10]: y, y1, mean, rstd, residual_out, seeds, dropout_mask, dropout_mask1 = _layer_norm_fwd( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 335, in _layer_norm_fwd |
|
[default2]:[rank10]: _layer_norm_fwd_1pass_kernel[(M,)]( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda> |
|
[default2]:[rank10]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 143, in run |
|
[default2]:[rank10]: timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs} |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 143, in <dictcomp> |
|
[default2]:[rank10]: timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs} |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 122, in _bench |
|
[default2]:[rank10]: return do_bench(kernel_call, warmup=self.warmup, rep=self.rep, quantiles=(0.5, 0.2, 0.8)) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/testing.py", line 102, in do_bench |
|
[default2]:[rank10]: fn() |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 110, in kernel_call |
|
[default2]:[rank10]: self.fn.run( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run |
|
[default2]:[rank10]: return self.fn.run(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run |
|
[default2]:[rank10]: return self.fn.run(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run |
|
[default2]:[rank10]: return self.fn.run(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run |
|
[default2]:[rank10]: self.cache[device][key] = compile( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/compiler/compiler.py", line 194, in compile |
|
[default2]:[rank10]: metadata_group[f"{src.name}.{ext}"] = fn_cache_manager.put(next_module, f"{src.name}.{ext}") |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/cache.py", line 123, in put |
|
[default2]:[rank10]: with open(temp_path, mode) as f: |
|
[default2]:[rank10]: OSError: [Errno 122] Disk quota exceeded |
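
The traceback above fails while Triton is writing its compiled-kernel cache (triton/runtime/cache.py), so the exhausted quota is on the filesystem holding that cache (by default under the user's home directory), not necessarily on the one holding data or checkpoints. One possible mitigation, assuming the installed Triton honors the TRITON_CACHE_DIR environment variable, is to point the cache at a filesystem with free quota before the first kernel is compiled (the path below is a hypothetical example):

    import os

    # Hypothetical scratch path with available quota; must be set before Triton
    # compiles its first kernel, e.g. at the top of the launch script.
    os.environ.setdefault("TRITON_CACHE_DIR", "/scratch/ferdinand_mom/triton_cache")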
|
[default3]:[rank11]: OSError: [Errno 122] Disk quota exceeded |
|
[default3]: |
|
[default3]:[rank11]: During handling of the above exception, another exception occurred: |
|
[default3]: |
|
[default3]:[rank11]: Traceback (most recent call last): |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default3]:[rank11]: trainer.train(dataloader) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default3]:[rank11]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default3]:[rank11]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default3]:[rank11]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default3]:[rank11]: output = model(**micro_batch) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank11]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank11]: return forward_call(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default3]:[rank11]: sharded_logits = self.model( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank11]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank11]: return forward_call(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default3]:[rank11]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default3]:[rank11]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank11]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank11]: return forward_call(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default3]:[rank11]: output = self.pp_block(**new_kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank11]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank11]: return forward_call(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 629, in forward |
|
[default3]:[rank11]: hidden_states = self.input_layernorm(hidden_states) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank11]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank11]: return forward_call(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/nn/layer_norm.py", line 42, in forward |
|
[default3]:[rank11]: return layer_norm_fn( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 875, in layer_norm_fn |
|
[default3]:[rank11]: return LayerNormFn.apply( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply |
|
[default3]:[rank11]: return super().apply(*args, **kwargs) # type: ignore[misc] |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 748, in forward |
|
[default3]:[rank11]: y, y1, mean, rstd, residual_out, seeds, dropout_mask, dropout_mask1 = _layer_norm_fwd( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 335, in _layer_norm_fwd |
|
[default3]:[rank11]: _layer_norm_fwd_1pass_kernel[(M,)]( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda> |
|
[default3]:[rank11]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 143, in run |
|
[default3]:[rank11]: timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs} |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 143, in <dictcomp> |
|
[default3]:[rank11]: timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs} |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 122, in _bench |
|
[default3]:[rank11]: return do_bench(kernel_call, warmup=self.warmup, rep=self.rep, quantiles=(0.5, 0.2, 0.8)) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/testing.py", line 102, in do_bench |
|
[default3]:[rank11]: fn() |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 110, in kernel_call |
|
[default3]:[rank11]: self.fn.run( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run |
|
[default3]:[rank11]: return self.fn.run(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run |
|
[default3]:[rank11]: return self.fn.run(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run |
|
[default3]:[rank11]: return self.fn.run(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run |
|
[default3]:[rank11]: self.cache[device][key] = compile( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/compiler/compiler.py", line 194, in compile |
|
[default3]:[rank11]: metadata_group[f"{src.name}.{ext}"] = fn_cache_manager.put(next_module, f"{src.name}.{ext}") |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/cache.py", line 123, in put |
|
[default3]:[rank11]: with open(temp_path, mode) as f: |
|
[default3]:[rank11]: OSError: [Errno 122] Disk quota exceeded |
|
[default7]:[rank7]: OSError: [Errno 122] Disk quota exceeded |
|
[default7]: |
|
[default7]:[rank7]: During handling of the above exception, another exception occurred: |
|
[default7]: |
|
[default7]:[rank7]: Traceback (most recent call last): |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default7]:[rank7]: trainer.train(dataloader) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default7]:[rank7]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default7]:[rank7]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default7]:[rank7]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default7]:[rank7]: output = model(**micro_batch) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default7]:[rank7]: return self._call_impl(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default7]:[rank7]: return forward_call(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default7]:[rank7]: sharded_logits = self.model( |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default7]:[rank7]: return self._call_impl(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default7]:[rank7]: return forward_call(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default7]:[rank7]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default7]:[rank7]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default7]:[rank7]: return self._call_impl(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default7]:[rank7]: return forward_call(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default7]:[rank7]: output = self.pp_block(**new_kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default7]:[rank7]: return self._call_impl(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default7]:[rank7]: return forward_call(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 631, in forward |
|
[default7]:[rank7]: output = self.attn(hidden_states=hidden_states, sequence_mask=sequence_mask) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default7]:[rank7]: return self._call_impl(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default7]:[rank7]: return forward_call(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 566, in forward |
|
[default7]:[rank7]: query_states, key_value_states = self.flash_rotary_embedding(query_states, kv=key_value_states) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default7]:[rank7]: return self._call_impl(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default7]:[rank7]: return forward_call(*args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 457, in forward |
|
[default7]:[rank7]: q = apply_rotary_emb_func( |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 122, in apply_rotary_emb |
|
[default7]:[rank7]: return ApplyRotaryEmb.apply( |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply |
|
[default7]:[rank7]: return super().apply(*args, **kwargs) # type: ignore[misc] |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 48, in forward |
|
[default7]:[rank7]: out = apply_rotary( |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/rotary.py", line 202, in apply_rotary |
|
[default7]:[rank7]: rotary_kernel[grid]( |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda> |
|
[default7]:[rank7]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run |
|
[default7]:[rank7]: self.cache[device][key] = compile( |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/compiler/compiler.py", line 194, in compile |
|
[default7]:[rank7]: metadata_group[f"{src.name}.{ext}"] = fn_cache_manager.put(next_module, f"{src.name}.{ext}") |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/cache.py", line 123, in put |
|
[default7]:[rank7]: with open(temp_path, mode) as f: |
|
[default7]:[rank7]: OSError: [Errno 122] Disk quota exceeded |
|
[default1]:[rank1]: OSError: [Errno 122] Disk quota exceeded |
|
[default1]: |
|
[default1]:[rank1]: During handling of the above exception, another exception occurred: |
|
[default1]: |
|
[default1]:[rank1]: Traceback (most recent call last): |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default1]:[rank1]: trainer.train(dataloader) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default1]:[rank1]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default1]:[rank1]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default1]:[rank1]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default1]:[rank1]: output = model(**micro_batch) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default1]:[rank1]: sharded_logits = self.model( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default1]:[rank1]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default1]:[rank1]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default1]:[rank1]: output = self.pp_block(**new_kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 631, in forward |
|
[default1]:[rank1]: output = self.attn(hidden_states=hidden_states, sequence_mask=sequence_mask) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 566, in forward |
|
[default1]:[rank1]: query_states, key_value_states = self.flash_rotary_embedding(query_states, kv=key_value_states) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 457, in forward |
|
[default1]:[rank1]: q = apply_rotary_emb_func( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 122, in apply_rotary_emb |
|
[default1]:[rank1]: return ApplyRotaryEmb.apply( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply |
|
[default1]:[rank1]: return super().apply(*args, **kwargs) # type: ignore[misc] |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 48, in forward |
|
[default1]:[rank1]: out = apply_rotary( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/rotary.py", line 202, in apply_rotary |
|
[default1]:[rank1]: rotary_kernel[grid]( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda> |
|
[default1]:[rank1]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run |
|
[default1]:[rank1]: self.cache[device][key] = compile( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/compiler/compiler.py", line 194, in compile |
|
[default1]:[rank1]: metadata_group[f"{src.name}.{ext}"] = fn_cache_manager.put(next_module, f"{src.name}.{ext}") |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/cache.py", line 123, in put |
|
[default1]:[rank1]: with open(temp_path, mode) as f: |
|
[default1]:[rank1]: OSError: [Errno 122] Disk quota exceeded |
|
[default6]:[rank14]: OSError: [Errno 122] Disk quota exceeded |
|
[default6]: |
|
[default6]:[rank14]: During handling of the above exception, another exception occurred: |
|
[default6]: |
|
[default6]:[rank14]: Traceback (most recent call last): |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default6]:[rank14]: trainer.train(dataloader) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default6]:[rank14]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default6]:[rank14]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default6]:[rank14]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default6]:[rank14]: output = model(**micro_batch) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default6]:[rank14]: return self._call_impl(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default6]:[rank14]: return forward_call(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default6]:[rank14]: sharded_logits = self.model( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default6]:[rank14]: return self._call_impl(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default6]:[rank14]: return forward_call(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default6]:[rank14]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default6]:[rank14]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default6]:[rank14]: return self._call_impl(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default6]:[rank14]: return forward_call(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default6]:[rank14]: output = self.pp_block(**new_kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default6]:[rank14]: return self._call_impl(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default6]:[rank14]: return forward_call(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 631, in forward |
|
[default6]:[rank14]: output = self.attn(hidden_states=hidden_states, sequence_mask=sequence_mask) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default6]:[rank14]: return self._call_impl(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default6]:[rank14]: return forward_call(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 566, in forward |
|
[default6]:[rank14]: query_states, key_value_states = self.flash_rotary_embedding(query_states, kv=key_value_states) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default6]:[rank14]: return self._call_impl(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default6]:[rank14]: return forward_call(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 457, in forward |
|
[default6]:[rank14]: q = apply_rotary_emb_func( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 122, in apply_rotary_emb |
|
[default6]:[rank14]: return ApplyRotaryEmb.apply( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply |
|
[default6]:[rank14]: return super().apply(*args, **kwargs) # type: ignore[misc] |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 48, in forward |
|
[default6]:[rank14]: out = apply_rotary( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/rotary.py", line 202, in apply_rotary |
|
[default6]:[rank14]: rotary_kernel[grid]( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda> |
|
[default6]:[rank14]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run |
|
[default6]:[rank14]: self.cache[device][key] = compile( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/compiler/compiler.py", line 194, in compile |
|
[default6]:[rank14]: metadata_group[f"{src.name}.{ext}"] = fn_cache_manager.put(next_module, f"{src.name}.{ext}") |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/cache.py", line 123, in put |
|
[default6]:[rank14]: with open(temp_path, mode) as f: |
|
[default6]:[rank14]: OSError: [Errno 122] Disk quota exceeded |
|
[default5]:[rank13]: OSError: [Errno 122] Disk quota exceeded |
|
[default5]: |
|
[default5]:[rank13]: During handling of the above exception, another exception occurred: |
|
[default5]: |
|
[default5]:[rank13]: Traceback (most recent call last): |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default5]:[rank13]: trainer.train(dataloader) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default5]:[rank13]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default5]:[rank13]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default5]:[rank13]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default5]:[rank13]: output = model(**micro_batch) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default5]:[rank13]: sharded_logits = self.model( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default5]:[rank13]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default5]:[rank13]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default5]:[rank13]: output = self.pp_block(**new_kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 631, in forward |
|
[default5]:[rank13]: output = self.attn(hidden_states=hidden_states, sequence_mask=sequence_mask) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 566, in forward |
|
[default5]:[rank13]: query_states, key_value_states = self.flash_rotary_embedding(query_states, kv=key_value_states) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 457, in forward |
|
[default5]:[rank13]: q = apply_rotary_emb_func( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 122, in apply_rotary_emb |
|
[default5]:[rank13]: return ApplyRotaryEmb.apply( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply |
|
[default5]:[rank13]: return super().apply(*args, **kwargs) # type: ignore[misc] |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 48, in forward |
|
[default5]:[rank13]: out = apply_rotary( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/rotary.py", line 202, in apply_rotary |
|
[default5]:[rank13]: rotary_kernel[grid]( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda> |
|
[default5]:[rank13]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run |
|
[default5]:[rank13]: self.cache[device][key] = compile( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/compiler/compiler.py", line 194, in compile |
|
[default5]:[rank13]: metadata_group[f"{src.name}.{ext}"] = fn_cache_manager.put(next_module, f"{src.name}.{ext}") |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/cache.py", line 123, in put |
|
[default5]:[rank13]: with open(temp_path, mode) as f: |
|
[default5]:[rank13]: OSError: [Errno 122] Disk quota exceeded |
|
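Ranks 1, 13 and 14 all die at exactly the same frame, which is expected: each rank compiles the same rotary kernel on first use and each one attempts the same cache write into the quota-limited filesystem. A single-GPU reproduction of the failing code path is useful for verifying a cache fix without relaunching the full 16-rank job; the sketch below just calls flash-attn's rotary embedding once. The tensor shapes are illustrative and assume flash-attn's (batch, seqlen, nheads, headdim) layout with cos/sin of shape (seqlen, rotary_dim // 2):

import torch
from flash_attn.layers.rotary import apply_rotary_emb

# Illustrative shapes: batch=1, seqlen=128, 8 heads, head_dim=64, rotary over the full head dim.
q = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.bfloat16)
cos = torch.randn(128, 32, device="cuda", dtype=torch.bfloat16)
sin = torch.randn(128, 32, device="cuda", dtype=torch.bfloat16)

# The first call triggers the Triton JIT compile and the cache write that failed above.
out = apply_rotary_emb(q, cos, sin)
print(out.shape)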
W0702 16:31:48.682000 139883342743360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 2408733 closing signal SIGTERM |
|
W0702 16:31:48.688000 139883342743360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 2408735 closing signal SIGTERM |
|
W0702 16:31:48.691000 139883342743360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 2408736 closing signal SIGTERM |
|
W0702 16:31:48.691000 140017403332416 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 80843 closing signal SIGTERM |
|
W0702 16:31:48.696000 139883342743360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 2408737 closing signal SIGTERM |
|
W0702 16:31:48.694000 140017403332416 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 80844 closing signal SIGTERM |
|
W0702 16:31:48.698000 140017403332416 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 80847 closing signal SIGTERM |
|
W0702 16:31:48.703000 140017403332416 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 80850 closing signal SIGTERM |
|
W0702 16:31:48.739000 139883342743360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 2408738 closing signal SIGTERM |
|
W0702 16:31:48.741000 139883342743360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 2408739 closing signal SIGTERM |
|
E0702 16:31:50.829000 140017403332416 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 2 (pid: 80845) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10 |
|
Traceback (most recent call last): |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module> |
|
sys.exit(main()) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper |
|
return f(*args, **kwargs) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main |
|
run(args) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run |
|
elastic_launch( |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ |
|
return launch_agent(self._config, self._entrypoint, list(args)) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent |
|
raise ChildFailedError( |
|
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: |
|
============================================================ |
|
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED |
|
------------------------------------------------------------ |
|
Failures: |
|
[1]: |
|
time : 2024-07-02_16:31:48 |
|
host : ip-26-0-169-247.ec2.internal |
|
rank : 11 (local_rank: 3) |
|
exitcode : 1 (pid: 80846) |
|
error_file: <N/A> |
|
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html |
|
[2]: |
|
time : 2024-07-02_16:31:48 |
|
host : ip-26-0-169-247.ec2.internal |
|
rank : 13 (local_rank: 5) |
|
exitcode : 1 (pid: 80848) |
|
error_file: <N/A> |
|
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html |
|
[3]: |
|
time : 2024-07-02_16:31:48 |
|
host : ip-26-0-169-247.ec2.internal |
|
rank : 14 (local_rank: 6) |
|
exitcode : 1 (pid: 80849) |
|
error_file: <N/A> |
|
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html |
|
------------------------------------------------------------ |
|
Root Cause (first observed failure): |
|
[0]: |
|
time : 2024-07-02_16:31:48 |
|
host : ip-26-0-169-247.ec2.internal |
|
rank : 10 (local_rank: 2) |
|
exitcode : 1 (pid: 80845) |
|
error_file: <N/A> |
|
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html |
|
============================================================ |
|
srun: error: ip-26-0-169-247: task 1: Exited with exit code 1 |
|
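The elastic agent's report above lists only exit codes because the workers' exceptions were never recorded to an error file ("error_file: <N/A>", "To enable traceback see ..."). Per the linked PyTorch elastic error-handling docs, wrapping the training entrypoint with the record decorator makes each worker dump its traceback so it appears directly in this summary when launched with torchrun. A minimal sketch (the body of main is a placeholder, not nanotron's actual run_train.py):

from torch.distributed.elastic.multiprocessing.errors import record

@record
def main():
    # Placeholder for the real training entrypoint, e.g. trainer.train(dataloader).
    raise RuntimeError("example failure that would now be captured in the elastic error summary")

if __name__ == "__main__":
    main()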
E0702 16:31:51.239000 139883342743360 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 1 (pid: 2408734) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10 |
|
Traceback (most recent call last): |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module> |
|
sys.exit(main()) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper |
|
return f(*args, **kwargs) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main |
|
run(args) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run |
|
elastic_launch( |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ |
|
return launch_agent(self._config, self._entrypoint, list(args)) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent |
|
raise ChildFailedError( |
|
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: |
|
============================================================ |
|
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED |
|
------------------------------------------------------------ |
|
Failures: |
|
[1]: |
|
time : 2024-07-02_16:31:48 |
|
host : ip-26-0-169-239.ec2.internal |
|
rank : 7 (local_rank: 7) |
|
exitcode : 1 (pid: 2408740) |
|
error_file: <N/A> |
|
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html |
|
------------------------------------------------------------ |
|
Root Cause (first observed failure): |
|
[0]: |
|
time : 2024-07-02_16:31:48 |
|
host : ip-26-0-169-239.ec2.internal |
|
rank : 1 (local_rank: 1) |
|
exitcode : 1 (pid: 2408734) |
|
error_file: <N/A> |
|
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html |
|
============================================================ |
|
srun: error: ip-26-0-169-239: task 0: Exited with exit code 1 |
|
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details. |
|
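The closing hint is unrelated to the crash: huggingface_hub is suggesting hf_transfer to speed up the upload of the run's logs/artifacts. Following the linked docs page, enabling it amounts to installing the hf_transfer package (pip install hf_transfer) and setting HF_HUB_ENABLE_HF_TRANSFER=1 before huggingface_hub performs any transfer. A sketch, with a hypothetical repo id and local path for illustration only:

import os

# Must be set before huggingface_hub starts any upload/download.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi  # noqa: E402

api = HfApi()
api.upload_folder(
    folder_path="logs/",                       # hypothetical local directory
    repo_id="user/bench_cluster_logs",         # hypothetical target repo
    repo_type="dataset",
)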
|