---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
pipeline_tag: text-to-image
base_model:
- RedRayz/illumina-xl-1.0
tags:
- stable-diffusion
- stable-diffusion-xl
new_version: RedRayz/abydos-xl-1.1
---
# Abydos-XL

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/630e2d981ef92d4e37a1694e/H9xpfWtV9jRET1JW70Enw.jpeg)

Illustrious-XL-v0.1 modified with a Blue Archive style.

This is the successor to Millennium-IL-1.0, with improved quality, stability, and detail rendering.

You can find example images on the [Civitai model page](https://civitai.com/models/832248).

## Prompt Guidelines
Almost the same as the base model.

## Recommended Prompt
None (works well without `masterpiece, best quality`).

## Recommended Negative Prompt
`worst quality, low quality, bad quality, lowres, jpeg artifacts, unfinished`

## Recommended Settings
- Steps: 14-28
- Sampler: DPM++ 2M (dpmpp_2m)
- Scheduler: Simple
- Guidance Scale: 4-9

An example script that applies these settings is included at the end of this card.

### Hires.fix
- Upscaler: 4x-UltraSharp or Latent
- Denoising strength: 0.5 (0.6 for Latent)

## Training information
Fine-tuned from Illumina-XL-1.0 by training a DoRA with sd-scripts and merging it back into the checkpoint, repeated 3 times (see the command sketch at the end of this card).
- Network module: lycoris_kohya (algo=lora, dora_wd=True)
- Resolution: 1024 (bucketing enabled, min 512, max 2048)
- Optimizer: Lion
- Train U-Net only: Yes
- LR Scheduler: cosine with restarts (warmup steps: 80-150, restarts: 4-6)
- Learning Rate: varied per round (min=2e-05, max=6e-05)
- Noise Offset: 0.04
- Immiscible Noise: 2048
- Batch size: 1
- Gradient Accumulation steps: 2
- Dim/Alpha: 16/4
- Conv Dim/Alpha: 1/0.1

## Dataset information
Dataset size: 289

## Training scripts
[sd-scripts](https://github.com/kohya-ss/sd-scripts)

## Notice
This model is licensed under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/).

If you modify this model, you must share your changes and distribute them under the original license.

You are prohibited from monetizing any closed-source fine-tuned or merged model, i.e. one that withholds its source code / weights and their usage from the public.
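## Example: inference with diffusers

A minimal, unofficial sketch of applying the recommended settings with [diffusers](https://github.com/huggingface/diffusers). The checkpoint filename and the prompt are placeholders, and `DPMSolverMultistepScheduler` is used as the closest diffusers equivalent of DPM++ 2M; the "Simple" sigma schedule is a UI-specific option, so the default schedule stands in for it here.

```python
# Unofficial sketch: load the checkpoint with diffusers and apply the settings
# recommended above. File name and prompt are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "abydos-xl.safetensors",  # placeholder: path to the downloaded checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# DPM++ 2M sampler; default sigma schedule as an approximation of "Simple".
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="1girl, solo, looking at viewer",  # placeholder tags; no quality tags needed
    negative_prompt=(
        "worst quality, low quality, bad quality, lowres, "
        "jpeg artifacts, unfinished"
    ),
    num_inference_steps=24,  # recommended range: 14-28
    guidance_scale=6.0,      # recommended range: 4-9
    width=1024,
    height=1024,
).images[0]
image.save("example.png")
```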
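## Example: one training round with sd-scripts

A hedged sketch of how the hyperparameters listed above could map to an `sdxl_train_network.py` invocation. This is not the exact command used for the release: paths are placeholders, single representative values are picked from the stated ranges, and the immiscible-noise option is omitted because its flag depends on the sd-scripts version.

```bash
# Unofficial sketch of a single DoRA training round; paths and the chosen
# learning rate / warmup / restart values are placeholders within the stated ranges.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path="illumina-xl-1.0.safetensors" \
  --train_data_dir="./dataset" \
  --output_dir="./output" --output_name="abydos-xl-dora" \
  --resolution="1024,1024" --enable_bucket --min_bucket_reso=512 --max_bucket_reso=2048 \
  --network_module=lycoris.kohya \
  --network_args "algo=lora" "dora_wd=True" "conv_dim=1" "conv_alpha=0.1" \
  --network_dim=16 --network_alpha=4 \
  --network_train_unet_only \
  --optimizer_type=Lion --learning_rate=4e-05 \
  --lr_scheduler=cosine_with_restarts --lr_warmup_steps=120 --lr_scheduler_num_cycles=5 \
  --noise_offset=0.04 \
  --train_batch_size=1 --gradient_accumulation_steps=2 \
  --mixed_precision="bf16" --save_model_as=safetensors
# After training, merge the resulting DoRA back into the checkpoint and repeat
# the cycle (3 rounds in total per the card).
```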