---
license: openrail++
language:
  - en
  - ja
  - zh
tags:
  - stable-diffusion
  - sygil-diffusion
  - text-to-image
  - sygil-devs
  - finetune
  - stable-diffusion-1.5
inference: false
pipeline_tag: text-to-image
---

## About the model

This model is a fine-tune of Stable Diffusion v1.5 trained on the Imaginary Network Expanded dataset. It can generate nearly every kind of image: people, reflections, cities, architecture, fantasy scenes, concept art, anime, manga, digital art, landscapes, and nature views. Multiple tags and namespaces give the user fine-grained control over the generation, including image composition.

Note that prompt engineering for this model differs somewhat from other Stable Diffusion models. Normal prompts still work, but to get the best out of this model you will need to make use of tags and namespaces. More about it here
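As a rough illustration of the tag/namespace style of prompting, the sketch below builds a comma-separated prompt from `namespace:value` pairs. The namespace names used here ("subject", "style") are hypothetical examples, not the model's actual vocabulary; the real namespaces come from the Imaginary Network Expanded dataset documentation linked above.

```python
# Minimal sketch of assembling a tag/namespace prompt. Namespace names are
# hypothetical illustrations, not the model's documented vocabulary.

def build_prompt(tags):
    """Join (namespace, value) pairs into a comma-separated prompt string.

    A pair whose namespace is None becomes a plain tag with no prefix.
    """
    return ", ".join(f"{ns}:{val}" if ns else val for ns, val in tags)

prompt = build_prompt([
    ("subject", "castle on a cliff"),
    ("style", "fantasy"),
    (None, "sunset"),
    (None, "highly detailed"),
])
print(prompt)  # subject:castle on a cliff, style:fantasy, sunset, highly detailed
```

The resulting string is passed to the pipeline like any ordinary Stable Diffusion prompt; the namespaces simply make each tag's role in the composition explicit.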

If you find our work useful, please consider supporting us using one of the options below:

  • OpenCollective

  • Become a Patreon

Join our Discord server for support and announcements: Join the Discord Server

## Showcase

Showcase image

## Training

Training data: the model was trained on the following dataset:

  • Imaginary Network Expanded dataset.

Training details:

  • Hardware: 1 x Nvidia RTX 3050 8GB GPU

  • Optimizer: AdamW

  • Gradient accumulations: 1

  • Batch size: 1

  • Learning rate: warmed up to 1e-7 over 10,000 steps, then kept constant

  • Total training steps: 800,000
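The learning-rate schedule above can be sketched as a plain function: a linear warm-up to the target rate over the first 10,000 steps, constant afterwards. This is an illustrative reconstruction from the bullet list, not the actual training code.

```python
# Hedged sketch of the schedule described above: linear warm-up to 1e-7
# over 10,000 steps, then constant for the rest of training.
TARGET_LR = 1e-7
WARMUP_STEPS = 10_000

def learning_rate(step):
    """Learning rate at a given (0-indexed) training step."""
    if step < WARMUP_STEPS:
        # Ramp linearly from ~0 up to TARGET_LR across the warm-up window.
        return TARGET_LR * (step + 1) / WARMUP_STEPS
    return TARGET_LR

print(learning_rate(9_999))    # end of warm-up: 1e-07
print(learning_rate(500_000))  # constant afterwards: 1e-07
```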

Developed by: Sygil-Dev

## License

This model is open access and available to all, with a CreativeML Open RAIL++-M License further specifying rights and usage. Please read the full license here.