14:19:54-924440 INFO     Starting SD-Trainer Mikazuki GUI...
14:19:54-927178 INFO     Base directory: /root/lora-scripts, Working directory: /root/lora-scripts
14:19:54-927884 INFO     Linux Python 3.10.9 /root/.conda/envs/lora/bin/python
14:19:54-933201 INFO     Starting tageditor...
14:19:54-936010 INFO     Starting tensorboard...
14:19:58-704848 INFO     Server started at http://127.0.0.1:28000
TensorFlow installation not found - running with reduced feature set.
NOTE: Using experimental fast data loading logic. To disable, pass "--load_fast=false" and report issues on GitHub. More details: https://github.com/tensorflow/tensorboard/issues/4784
TensorBoard 2.10.1 at http://127.0.0.1:6006/ (Press CTRL+C to quit)
14:20:52-995764 INFO     Torch 2.3.0+cu121
14:20:53-470315 INFO     Torch backend: nVidia CUDA 12.1 cuDNN 8902
14:20:53-876159 INFO     Torch detected GPU: NVIDIA A100-SXM4-80GB VRAM 81051 Arch (8, 0) Cores 108
14:24:14-346637 INFO     Training started with config file: /root/lora-scripts/config/autosave/20240716-142414.toml
14:24:14-349477 INFO     Task 6fd14190-b173-4715-b710-7293b373447e created
The following values were not passed to `accelerate launch` and had defaults used instead:
        `--num_processes` was set to a value of `1`
        `--num_machines` was set to a value of `1`
        `--mixed_precision` was set to a value of `'no'`
        `--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
2024-07-16 14:24:58 INFO     Loading settings from /root/lora-scripts/config/autosave/20240716-142414.toml...   train_util.py:3744
                    INFO     /root/lora-scripts/config/autosave/20240716-142414                                 train_util.py:3763
2024-07-16 14:24:58 INFO     prepare tokenizer                                                                  train_util.py:4227
2024-07-16 14:24:59 INFO     update token length: 255                                                           train_util.py:4244
2024-07-16 14:25:00 INFO     prepare images.                                                                    train_util.py:1572
                    INFO     found directory /train6/1_dongman contains 916 image files                         train_util.py:1519
                    INFO     916 train images with repeating.
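The `accelerate launch` warning earlier in the log is harmless for a single-GPU run, but it can be silenced by passing the four defaulted flags explicitly. A sketch, assuming the trainer script `train_db.py` and the config path shown in the log (exact arguments may differ between lora-scripts versions); alternatively, run `accelerate config` once to persist the answers:

```shell
# Pass the defaulted values explicitly so `accelerate launch` stops warning.
accelerate launch \
  --num_processes=1 \
  --num_machines=1 \
  --mixed_precision=no \
  --dynamo_backend=no \
  train_db.py --config_file=/root/lora-scripts/config/autosave/20240716-142414.toml
```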
                                                                                                                train_util.py:1613
                    INFO     0 reg images.                                                                      train_util.py:1616
                    WARNING  no regularization images                                                           train_util.py:1621
                    INFO     [Dataset 0]                                                                        config_util.py:565
                               batch_size: 64
                               resolution: (1024, 1024)
                               enable_bucket: True
                               network_multiplier: 1.0
                               min_bucket_reso: 256
                               max_bucket_reso: 2048
                               bucket_reso_steps: 64
                               bucket_no_upscale: False

                               [Subset 0 of Dataset 0]
                                 image_dir: "/train6/1_dongman"
                                 image_count: 916
                                 num_repeats: 1
                                 shuffle_caption: True
                                 keep_tokens: 0
                                 keep_tokens_separator:
                                 secondary_separator: None
                                 enable_wildcard: False
                                 caption_dropout_rate: 0.0
                                 caption_dropout_every_n_epoches: 0
                                 caption_tag_dropout_rate: 0.0
                                 caption_prefix: None
                                 caption_suffix: None
                                 color_aug: False
                                 flip_aug: False
                                 face_crop_aug_range: None
                                 random_crop: False
                                 token_warmup_min: 1
                                 token_warmup_step: 0
                                 is_reg: False
                                 class_tokens: dongman
                                 caption_extension: .txt
                    INFO     [Dataset 0]                                                                        config_util.py:571
                    INFO     loading image sizes.                                                               train_util.py:853
100%|██████████| 916/916 [00:00<00:00, 85199.42it/s]
                    INFO     make buckets                                                                       train_util.py:859
                    INFO     number of images (including repeats)                                               train_util.py:905
                    INFO     bucket 0: resolution (704, 1408), count: 29                                        train_util.py:910
                    INFO     bucket 1: resolution (768, 1280), count: 6                                         train_util.py:910
                    INFO     bucket 2: resolution (768, 1344), count: 723                                       train_util.py:910
                    INFO     bucket 3: resolution (832, 1216), count: 123                                       train_util.py:910
                    INFO     bucket 4: resolution (1216, 832), count: 2                                         train_util.py:910
                    INFO     bucket 5: resolution (1344, 768), count: 33                                        train_util.py:910
                    INFO     mean ar error (without repeats): 0.011380946831128346                              train_util.py:915
                    INFO     prepare accelerator                                                                train_db.py:106
wandb: Currently logged in as: cn42083120024 (renwu).
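The bucket resolutions above (e.g. `(768, 1344)`, `(1216, 832)`) follow from the dataset settings: side lengths are multiples of `bucket_reso_steps: 64`, the area is capped at `resolution` 1024x1024, and each image goes to the bucket whose aspect ratio is closest to its own. A simplified sketch of the idea, not kohya-ss's exact algorithm (the real one also honors `min_bucket_reso`/`max_bucket_reso` differently and `bucket_no_upscale`):

```python
# Simplified aspect-ratio bucketing sketch (not the exact sd-scripts algorithm).
def make_buckets(max_area=1024 * 1024, step=64, min_reso=256, max_reso=2048):
    buckets = set()
    for w in range(min_reso, max_reso + 1, step):
        # tallest height (a multiple of `step`) that keeps the area under the cap
        h = (max_area // w) // step * step
        if min_reso <= h <= max_reso:
            buckets.add((w, h))
            buckets.add((h, w))  # transposed bucket covers landscape ratios
    return sorted(buckets)

def assign_bucket(width, height, buckets):
    # pick the bucket whose aspect ratio is closest to the image's
    ar = width / height
    return min(buckets, key=lambda b: abs(b[0] / b[1] - ar))

buckets = make_buckets()
# A 4:7 portrait image lands in the (768, 1344) bucket seen in the log.
print(assign_bucket(1536, 2688, buckets))  # -> (768, 1344)
```

With 723 of the 916 images falling into one bucket, the small "mean ar error" of ~0.011 indicates the candidate grid matches the dataset's aspect ratios closely.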
Use `wandb login --relogin` to force relogin
wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc
accelerator device: cuda
2024-07-16 14:25:24 INFO     loading model for process 0/1                                                      train_util.py:4385
                    INFO     load StableDiffusion checkpoint: ./sd-models/model.safetensors                     train_util.py:4341
2024-07-16 14:25:29 INFO     UNet2DConditionModel: 64, 8, 768, False, False                                     original_unet.py:1387
2024-07-16 14:25:56 INFO     loading u-net:                                                                     model_util.py:1009
2024-07-16 14:26:01 INFO     loading vae:                                                                       model_util.py:1017
2024-07-16 14:26:09 INFO     loading text encoder:                                                              model_util.py:1074
                    INFO     Enable xformers for U-Net                                                          train_util.py:2660
                    INFO     [Dataset 0]                                                                        train_util.py:2079
                    INFO     caching latents.                                                                   train_util.py:974
                    INFO     checking cache validity...                                                         train_util.py:984
100%|██████████| 916/916 [00:00<00:00, 4252.76it/s]
2024-07-16 14:26:10 INFO     caching latents...                                                                 train_util.py:1021
100%|██████████| 916/916 [08:53<00:00, 1.72it/s]
2024-07-16 14:35:03 INFO     CrossAttnDownBlock2D False -> True                                                 original_unet.py:1521
                    INFO     CrossAttnDownBlock2D False -> True                                                 original_unet.py:1521
                    INFO     CrossAttnDownBlock2D False -> True                                                 original_unet.py:1521
                    INFO     DownBlock2D False -> True                                                          original_unet.py:1521
                    INFO     UNetMidBlock2DCrossAttn False -> True                                              original_unet.py:1521
                    INFO     UpBlock2D False -> True                                                            original_unet.py:1521
                    INFO     CrossAttnUpBlock2D False -> True                                                   original_unet.py:1521
                    INFO     CrossAttnUpBlock2D False -> True                                                   original_unet.py:1521
                    INFO     CrossAttnUpBlock2D False -> True                                                   original_unet.py:1521
prepare optimizer, data loader etc.
2024-07-16 14:35:04 INFO     use 8-bit AdamW optimizer | {}                                                     train_util.py:3889
override steps.
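The latent-caching pass above dominates the startup time. Its progress bar is internally consistent, which is worth checking when a run looks stalled: 916 images at the reported ~1.72 it/s should take close to the 8 minutes 53 seconds tqdm shows.

```python
# Sanity check on the latent-caching progress bar:
# 916 images at ~1.72 iterations/second, values taken from the log's tqdm line.
images = 916
rate = 1.72  # it/s

seconds = images / rate
minutes, secs = divmod(round(seconds), 60)
print(f"{minutes}:{secs:02d}")  # -> 8:53
```

After this one-time pass the cached latents are reused every epoch, so the VAE encode cost is not paid again during the 540 training steps.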
steps for 30 epochs: 540
running training
  num train images * repeats: 916
  num reg images: 0
  num batches per epoch: 18
  num epochs: 30
  batch size per device: 64
  total train batch size (with parallel & distributed & accumulation): 64
  gradient accumulation steps: 1
  total optimization steps: 540
steps:   0%|          | 0/540 [00:00<?, ?it/s]
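The step count is worth unpacking, because 18 batches per epoch is more than the naive ceil(916 / 64) = 15. Batches are drawn within each aspect-ratio bucket, so every bucket contributes at least one (possibly partial) batch per epoch:

```python
import math

# Reproduce the log's step arithmetic from the per-bucket image counts.
bucket_counts = [29, 6, 723, 123, 2, 33]  # bucket sizes reported in the log
batch_size = 64
epochs = 30

# Batches are formed inside each bucket, so small buckets still cost a step.
batches_per_epoch = sum(math.ceil(n / batch_size) for n in bucket_counts)
total_steps = batches_per_epoch * epochs
print(batches_per_epoch, total_steps)  # -> 18 540
```

Mixing images of different resolutions in one batch is not possible, which is why the four near-empty buckets (29, 6, 2, and 33 images) each add an underfilled step per epoch; consolidating dataset aspect ratios would reduce wasted batch capacity.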