
Core ML Converted Model:

  • This model was converted to Core ML for use on Apple Silicon devices. Conversion instructions can be found here.
  • Provide the model to an app such as Mochi Diffusion (GitHub / Discord) to generate images, or load it programmatically as sketched after this list.
  • The split_einsum version is compatible with all compute unit options, including the Neural Engine.
  • The original version is only compatible with the CPU & GPU option.
  • Different resolution versions are tagged accordingly.
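
If you want to load the converted resources directly rather than through an app, a minimal sketch along these lines could work, assuming Apple's ml-stable-diffusion Swift package. The folder path, prompt, and sampler settings below are placeholders, and because SSD-1B is SDXL-derived, some package versions may require the XL pipeline variant instead:

```swift
import CoreML
import StableDiffusion  // from Apple's ml-stable-diffusion Swift package (assumed dependency)

// Hypothetical path to the unzipped folder of compiled .mlmodelc resources.
let resources = URL(fileURLWithPath: "/path/to/SSD-1B_8bit/split_einsum")

// split_einsum works with every compute unit, including the Neural Engine;
// for the "original" variant, use .cpuAndGPU instead.
let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndNeuralEngine

// NOTE: SSD-1B is SDXL-derived, so depending on the package version the
// XL pipeline variant may be required instead of StableDiffusionPipeline.
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resources,
    configuration: mlConfig,
    reduceMemory: true        // helpful on devices with limited RAM
)
try pipeline.loadResources()

var generation = StableDiffusionPipeline.Configuration(
    prompt: "a lighthouse on a rocky coast at sunset, highly detailed"
)
generation.stepCount = 25
generation.seed = 42

// Returns one CGImage? per requested image.
let images = try pipeline.generateImages(configuration: generation) { _ in true }
```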

SSD-1B_8bit:

Source: Segmind

SSD-1B

This is the original Segmind SSD-1B base model, converted and quantized to 8 bits.

The SSD-1B model has been meticulously engineered with a strong focus on speed and efficiency. It delivers a remarkable 60% speed-up in inference and fine-tuning compared to SDXL, making it an ideal choice for real-time applications and situations where rapid image generation is a critical requirement.

SSD-1B is 50 percent more compact than SDXL at the same bit depth, making it easier to deploy and use across systems and platforms without sacrificing performance. It is a 1.3 billion parameter model in which several layers have been removed from the base SDXL model.

This model employs a knowledge distillation strategy, leveraging the teachings of several expert models in succession, including SDXL 1.0, ZavyChromaXL, and JuggernautXL, to combine their strengths and produce impressive images. Training also included data from a variety of datasets, including GRIT and Midjourney scrape data; in total, around 15 million image-prompt pairs were used. This diverse training data equips SSD-1B to generate a wide spectrum of visual content based on textual prompts.

SSD-1B comes with strong generation abilities out of the box, but for the best performance on your specific task, we recommend fine-tuning the model on your private data. This process can be done in hours.
