---
license: openrail++
language:
  - en
widget:
  - text: a beautiful illustration of a fantasy forest
tags:
  - stable-diffusion
  - sygil-diffusion
  - text-to-image
  - sygil-devs
  - finetune
  - stable-diffusion-1.5
inference: true
pinned: true
---

## About the model

This model is a fine-tune of Stable Diffusion v1.5, trained on the Imaginary Network Expanded dataset, with the big advantage of allowing the use of multiple namespaces (labeled tags) to control various parts of the final generation. While current models are usually prone to “context errors” and need substantial negative prompting to set them on the right track, the use of namespaces in this model (e.g. “species:seal” or “studio:dc”) stops the model from misinterpreting a seal as the singer Seal, or DC Comics as Washington DC.
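
For illustration, here are a couple of hypothetical namespaced prompts (the `species:` and `studio:` namespaces come from the examples above; the full set of tags is catalogued in the Dataset Explorer mentioned below):

```python
# Hypothetical prompts showing the namespace syntax described above.
prompt = "species:seal, a seal resting on a rocky beach"  # the animal, not the singer
prompt = "studio:dc, concept art of a superhero skyline"  # DC Comics, not Washington DC
```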

As the model is fine-tuned on a wide variety of content, it’s able to generate many types of images and compositions, and easily outperforms the original model when it comes to portraits, architecture, reflections, fantasy, concept art, and landscapes without being hyper-specialized like other community fine-tunes that are currently available.

**Note:** The prompt engineering techniques needed are slightly different from those for other fine-tunes and the original SD 1.5, so while you can still use your favorite prompts, for best results you may need to tweak them to make use of namespaces. A more detailed guide will be available shortly, but the examples here and this Dataset Explorer should be able to start you off on the right track.

If you find our work useful, please consider supporting us on OpenCollective!

This model is still in its infancy, so feel free to give us feedback on our Discord Server or in the discussions section on Hugging Face. We plan to improve it with more and better tags in the future, so any help is always welcome 😛

## Showcase

*Showcase image*

## Examples

Use the 🤗 Diffusers library to run Sygil Diffusion in a simple and efficient manner:

```bash
pip install diffusers transformers accelerate scipy safetensors
```

Running the pipeline (if you don't swap the scheduler, it will run with the default DDIM; in this example we swap it to `DPMSolverMultistepScheduler`):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

model_id = "Sygil/Sygil-Diffusion"

# Load the pipeline in half precision, then swap in the
# DPMSolverMultistepScheduler (DPM-Solver++) scheduler.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "a beautiful illustration of a fantasy forest"
image = pipe(prompt).images[0]
image.save("fantasy_forest_illustration.png")
```

Notes:

- Despite not being a dependency, we highly recommend installing xformers for memory-efficient attention (better performance).
- If you have low GPU RAM available, add `pipe.enable_attention_slicing()` after sending the pipeline to CUDA to reduce VRAM usage (at the cost of speed). A sketch of both options follows this list.
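
Both options, continuing from the `pipe` created in the example above (`enable_xformers_memory_efficient_attention()` and `enable_attention_slicing()` are standard Diffusers pipeline methods; xformers must be installed separately):

```python
# Continuing from the `pipe` created in the example above.

# Memory-efficient attention via xformers (requires `pip install xformers`).
pipe.enable_xformers_memory_efficient_attention()

# Compute attention in slices to reduce VRAM usage (at the cost of speed).
pipe.enable_attention_slicing()
```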

## Available Checkpoints

## Training

**Training Data**: The model was trained on the Imaginary Network Expanded dataset.

### Hardware and others

- **Hardware**: 1 x Nvidia RTX 3050 8GB GPU
- **Hours Trained**: approximately 496 hours
- **Optimizer**: AdamW
- **Gradient Accumulations**: 2
- **Batch size**: 1
- **Learning rate**: warmed up to 1e-7 over 10,000 steps, then kept constant (a sketch of this schedule follows this list)
- **Total Training Steps**: 1,344,635
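
As a rough illustration only (this is not the actual training code; `unet` is a hypothetical stand-in for the fine-tuned model), the stated optimizer and learning-rate schedule could be expressed in PyTorch like this:

```python
import torch

unet = torch.nn.Linear(4, 4)  # hypothetical stand-in for the fine-tuned U-Net

# AdamW with a linear warmup to 1e-7 over 10,000 steps, then held constant.
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-7)

def warmup_then_constant(step, warmup_steps=10_000):
    # Linearly scale the base LR during warmup, then keep it at 1.0x.
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_then_constant)
```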

**Developed by**: ZeroCool94 at Sygil-Dev

## Community Contributions

This model card is based on the Stable Diffusion v1 and DALL-E Mini model cards.

## License

This model is open access and available to all, with a CreativeML Open RAIL++-M License further specifying rights and usage. Please read the full license here.