Detected kernel version 4.18.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.

02/04/2024 13:00:25 - INFO - __main__ - Distributed environment: DistributedType.MULTI_GPU  Backend: nccl  Num processes: 8  Process index: 0  Local process index: 0  Device: cuda:0  Mixed precision type: no
02/04/2024 13:00:27 - INFO - __main__ - Distributed environment: DistributedType.MULTI_GPU  Backend: nccl  Num processes: 8  Process index: 6  Local process index: 6  Device: cuda:6  Mixed precision type: no
02/04/2024 13:00:27 - INFO - __main__ - Distributed environment: DistributedType.MULTI_GPU  Backend: nccl  Num processes: 8  Process index: 2  Local process index: 2  Device: cuda:2  Mixed precision type: no
02/04/2024 13:00:27 - INFO - __main__ - Distributed environment: DistributedType.MULTI_GPU  Backend: nccl  Num processes: 8  Process index: 4  Local process index: 4  Device: cuda:4  Mixed precision type: no
02/04/2024 13:00:27 - INFO - __main__ - Distributed environment: DistributedType.MULTI_GPU  Backend: nccl  Num processes: 8  Process index: 7  Local process index: 7  Device: cuda:7  Mixed precision type: no
02/04/2024 13:00:28 - INFO - __main__ - Distributed environment: DistributedType.MULTI_GPU  Backend: nccl  Num processes: 8  Process index: 5  Local process index: 5  Device: cuda:5  Mixed precision type: no
02/04/2024 13:00:28 - INFO - __main__ - Distributed environment: DistributedType.MULTI_GPU  Backend: nccl  Num processes: 8  Process index: 3  Local process index: 3  Device: cuda:3  Mixed precision type: no
02/04/2024 13:00:28 - INFO - __main__ - Distributed environment: DistributedType.MULTI_GPU  Backend: nccl  Num processes: 8  Process index: 1  Local process index: 1  Device: cuda:1  Mixed precision type: no

/cmlscratch/azarei/controlnet_diffusers/NEW/diffusers/.venv/lib/python3.11/site-packages/transformers/models/t5/tokenization_t5.py:240: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5. For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.
- Be aware that you SHOULD NOT rely on t5-large automatically truncating your input to 512 when padding/encoding.
- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.
- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.
  warnings.warn(
[The same FutureWarning was emitted once by each of the 8 processes; the remaining copies are omitted here.]
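As the FutureWarning above suggests, it can be silenced by pinning `model_max_length` when the tokenizer is created. A minimal sketch, assuming the tokenizer is loaded with `AutoTokenizer` and that 512 is the intended limit (the log does not say which value the script actually wants):

```python
from transformers import AutoTokenizer

# Hypothetical illustration: pin model_max_length explicitly so the T5
# tokenizer no longer relies on the legacy default that triggers the warning.
tokenizer = AutoTokenizer.from_pretrained("t5-large", model_max_length=512)

# Truncation must still be requested explicitly; t5-large will NOT
# auto-truncate to 512, as the warning points out.
ids = tokenizer("a very long prompt ...", truncation=True, max_length=512).input_ids
```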
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'clip_sample_range', 'timestep_spacing', 'thresholding', 'variance_type', 'dynamic_thresholding_ratio', 'sample_max_value'} was not found in config. Values will be initialized to default values.
{'scaling_factor', 'force_upcast'} was not found in config. Values will be initialized to default values.
{'upcast_attention', 'num_attention_heads', 'mid_block_only_cross_attention', 'cross_attention_norm', 'conv_out_kernel', 'encoder_hid_dim', 'addition_time_embed_dim', 'class_embed_type', 'time_embedding_dim', 'timestep_post_act', 'resnet_time_scale_shift', 'resnet_skip_time_act', 'attention_type', 'reverse_transformer_layers_per_block', 'conv_in_kernel', 'transformer_layers_per_block', 'projection_class_embeddings_input_dim', 'addition_embed_type_num_heads', 'mid_block_type', 'dropout', 'addition_embed_type', 'time_cond_proj_dim', 'encoder_hid_dim_type', 'time_embedding_type', 'class_embeddings_concat', 'resnet_out_scale_factor', 'time_embedding_act_fn'} was not found in config. Values will be initialized to default values.
02/04/2024 13:01:18 - INFO - __main__ - Initializing controlnet weights from unet
wandb: Currently logged in as: armanzarei. Use `wandb login --relogin` to force relogin
wandb: wandb version 0.16.2 is available! To upgrade, please run:
wandb:   $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.16.1
wandb: Run data is saved locally in /cmlscratch/azarei/controlnet_diffusers/NEW/diffusers/examples/controlnet/wandb/run-20240204_130216-tmxxyw1d
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run legendary-dust-1
wandb: ⭐️ View project at https://wandb.ai/armanzarei/only_t5_large_controlnet
wandb: 🚀 View run at https://wandb.ai/armanzarei/only_t5_large_controlnet/runs/tmxxyw1d
02/04/2024 13:02:19 - INFO - __main__ - ***** Models & Arguments *****
02/04/2024 13:02:19 - INFO - __main__ - T5 Text Encoder Model = t5-large (Output Dim = 1024)
02/04/2024 13:02:19 - INFO - __main__ - ***** Running training *****
02/04/2024 13:02:19 - INFO - __main__ -   Num examples = 566747
02/04/2024 13:02:19 - INFO - __main__ -   Num batches each epoch = 17711
02/04/2024 13:02:19 - INFO - __main__ -   Num Epochs = 1
02/04/2024 13:02:19 - INFO - __main__ -   Instantaneous batch size per device = 4
02/04/2024 13:02:19 - INFO - __main__ -   Total train batch size (w. parallel, distributed & accumulation) = 32
02/04/2024 13:02:19 - INFO - __main__ -   Gradient Accumulation steps = 1
02/04/2024 13:02:19 - INFO - __main__ -   Total optimization steps = 15001
Steps: 0%| | 0/15001 [00:00
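The `legacy` tokenizer notice earlier in the log is purely informational. If the new behaviour is actually wanted, recent transformers versions accept a `legacy=False` flag at load time, per the PR linked in the message. A sketch under that assumption:

```python
from transformers import AutoTokenizer

# Opt in to the new (non-legacy) T5 tokenization behaviour; read
# https://github.com/huggingface/transformers/pull/24565 before doing so.
tokenizer = AutoTokenizer.from_pretrained("t5-large", legacy=False)
```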
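The line "Initializing controlnet weights from unet" matches what diffusers' ControlNet training example does: it builds the ControlNet by copying weights over from the base UNet via `ControlNetModel.from_unet`. A minimal sketch; the `runwayml/stable-diffusion-v1-5` checkpoint is an assumption for illustration, since the log never names the base model:

```python
from diffusers import ControlNetModel, UNet2DConditionModel

# Assumed base checkpoint (NOT named in the log) -- illustration only.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# from_unet constructs a ControlNet matching the UNet's architecture and
# copies the shared weights, i.e. "initializing controlnet weights from unet".
controlnet = ControlNetModel.from_unet(unet)
```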
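The training-setup numbers at the end are internally consistent; a quick check of the arithmetic, using only values taken from the log:

```python
num_examples = 566_747
per_device_batch_size = 4   # "Instantaneous batch size per device"
num_processes = 8           # one process per GPU (cuda:0 .. cuda:7)
grad_accum_steps = 1        # "Gradient Accumulation steps"

total_batch_size = per_device_batch_size * num_processes * grad_accum_steps
assert total_batch_size == 32          # matches "Total train batch size"

batches_per_epoch = -(-num_examples // total_batch_size)  # ceiling division
assert batches_per_epoch == 17711     # matches "Num batches each epoch"

# Total optimization steps (15001) is below 17711, hence "Num Epochs = 1".
```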