alfredplpl committed
Commit e105842
1 Parent(s): 830f94d

Update README.md

Files changed (1):
  README.md +10 -1
README.md CHANGED
@@ -22,8 +22,17 @@ tags:
 6. Get the following image.
 ![example](example.jpg)
 
+### Examples
+
+**Please use ChatGPT or Claude to make a prompt!**
+
+![example1](example1.jpg)
+```
+modern anime style, A close-up portrait of a young girl with green hair. Her hair is vibrant and shoulder-length, framing her face softly. She has large, expressive eyes that are slightly tilted upward, with a gentle and calm expression. Her facial features are delicate, with a small nose and soft lips. The background is simple, focusing attention on her face, with soft lighting that highlights her features. The overall style of the illustration is warm and inviting, with a soft color palette and a slightly dreamy atmosphere.
+```
+
 ## How to make the LoRA Adapter
-I used sd-scripts. The parameters is as follows:
+I used [sd-scripts](https://github.com/kohya-ss/sd-scripts/tree/sd3) (the sd3 branch). The parameters are as follows:
 ```bash
 accelerate launch --num_cpu_threads_per_process 1 flux_train_network.py --pretrained_model_name_or_path '/mnt/NVM/flux/flux1-dev.safetensors' --clip_l '/mnt/NVM/flux/clip_l.safetensors' --t5xxl '/mnt/NVM/flux/t5xxl_fp16.safetensors' --ae '/mnt/NVM/flux/ae.safetensors' --cache_latents --save_model_as safetensors --sdpa --persistent_data_loader_workers --max_data_loader_n_workers 2 --seed 42 --gradient_checkpointing --save_precision bf16 --network_module networks.lora_flux --network_dim 16 --network_alpha 16 --optimizer_type adamw8bit --learning_rate 1e-3 --network_train_unet_only --cache_text_encoder_outputs --max_train_epochs 3 --save_every_n_epochs 1 --dataset_config flux_lora.toml --output_dir /mnt/NVM/flux --output_name flux_lora --timestep_sampling sigmoid --model_prediction_type raw --discrete_flow_shift 3.0 --guidance_scale 1.0 --loss_type l2 --mixed_precision bf16 --full_bf16 --max_bucket_reso 2048 --min_bucket_reso 512 --apply_t5_attn_mask --lr_scheduler cosine --lr_warmup_steps 10
 ```
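
The training command reads its dataset definition from `flux_lora.toml` via `--dataset_config`, but that file is not part of this commit. Below is a minimal sketch of what such a file could look like in the sd-scripts dataset-config format; the image directory, repeat count, resolution, and batch size are illustrative assumptions, not values from the original.

```toml
# Hypothetical flux_lora.toml for sd-scripts' --dataset_config.
# All paths and numbers here are placeholders, not from the commit.
[general]
enable_bucket = true        # bucketing, bounded by --max/min_bucket_reso on the CLI
caption_extension = ".txt"  # one caption file per image

[[datasets]]
resolution = 1024
batch_size = 1

  [[datasets.subsets]]
  image_dir = "/mnt/NVM/flux/dataset"  # assumed location of the training images
  num_repeats = 10
```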