---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
language:
- en
tags:
- art
---

# Modern Anime LoRA Adapter for FLUX.1 dev

![eyecatch](eyecatch.jpg)

## Usage - ComfyUI

1. Download [modern-anime-lora.safetensors](modern-anime-lora.safetensors).
2. Move the file to `ComfyUI/models/loras`.
3. Launch ComfyUI.
4. Load [the workflow](anime-workflow.json).
5. Queue the prompt. (trigger words: `modern anime style,`)
6. You should get an image like the one below.

![example](example.jpg)

## How to make the LoRA Adapter

I used [sd-scripts](https://github.com/kohya-ss/sd-scripts). The parameters are as follows:

```bash
accelerate launch --num_cpu_threads_per_process 1 flux_train_network.py \
  --pretrained_model_name_or_path '/mnt/NVM/flux/flux1-dev.safetensors' \
  --clip_l '/mnt/NVM/flux/clip_l.safetensors' \
  --t5xxl '/mnt/NVM/flux/t5xxl_fp16.safetensors' \
  --ae '/mnt/NVM/flux/ae.safetensors' \
  --cache_latents \
  --save_model_as safetensors \
  --sdpa \
  --persistent_data_loader_workers \
  --max_data_loader_n_workers 2 \
  --seed 42 \
  --gradient_checkpointing \
  --save_precision bf16 \
  --network_module networks.lora_flux \
  --network_dim 16 \
  --network_alpha 16 \
  --optimizer_type adamw8bit \
  --learning_rate 1e-3 \
  --network_train_unet_only \
  --cache_text_encoder_outputs \
  --max_train_epochs 3 \
  --save_every_n_epochs 1 \
  --dataset_config flux_lora.toml \
  --output_dir /mnt/NVM/flux \
  --output_name flux_lora \
  --timestep_sampling sigmoid \
  --model_prediction_type raw \
  --discrete_flow_shift 3.0 \
  --guidance_scale 1.0 \
  --loss_type l2 \
  --mixed_precision bf16 \
  --full_bf16 \
  --max_bucket_reso 2048 \
  --min_bucket_reso 512 \
  --apply_t5_attn_mask \
  --lr_scheduler cosine \
  --lr_warmup_steps 10
```

The dataset configuration (`flux_lora.toml`) is:

```toml
[general]
enable_bucket = true

[[datasets]]
resolution = 1024
batch_size = 4

[[datasets.subsets]]
image_dir = '/mnt/NVM/flux_lora'
metadata_file = 'flux_lora.json'
```
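The `metadata_file` above (`flux_lora.json`) is a caption metadata JSON consumed by sd-scripts. As a rough sketch only (the exact schema and image-key convention depend on the sd-scripts version and its preprocessing scripts, so check its documentation), such a file can be assembled from per-image `.txt` captions like this; the script name and dataset layout are illustrative assumptions, not the actual preprocessing used here:

```python
# build_metadata.py -- illustrative sketch, not the script used for this adapter.
# Assumes every image in /mnt/NVM/flux_lora has a sibling .txt file holding its caption,
# and that sd-scripts accepts a {image_key: {"caption": ...}} metadata layout.
import json
from pathlib import Path

image_dir = Path("/mnt/NVM/flux_lora")
metadata = {}

for image_path in sorted(image_dir.glob("*.png")):
    caption_path = image_path.with_suffix(".txt")
    if not caption_path.exists():
        continue
    # Depending on the sd-scripts version, the key may be the full path or the
    # file name without extension; adjust to match your setup.
    metadata[str(image_path)] = {
        "caption": caption_path.read_text(encoding="utf-8").strip()
    }

with open("flux_lora.json", "w", encoding="utf-8") as f:
    json.dump(metadata, f, ensure_ascii=False, indent=2)
```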
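## Usage - diffusers (sketch)

If you would rather run the adapter outside ComfyUI, a minimal diffusers sketch looks roughly like the following. The local file path, prompt, and sampling settings are placeholders, and FLUX.1 dev in bf16 needs a large GPU, so adjust the offloading to your hardware:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# Load the LoRA weights downloaded from this repository (placeholder path).
pipe.load_lora_weights("./modern-anime-lora.safetensors")
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

image = pipe(
    "modern anime style, a girl walking through a neon-lit city at night",
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("output.png")
```

Keep the trigger words (`modern anime style,`) in the prompt.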