---
title: Core ML Models
emoji: 🐱
pinned: false
tags:
- coreml
- stable-diffusion
---
[↓↓↓ **Scroll down (or click here) to see models** ↓↓↓](https://huggingface.co/coreml-community#models)
# Core ML Models Repository
Thanks to Apple engineers, we can now run Stable Diffusion on Apple Silicon using Core ML!\
However, compatible models are hard to find, and converting models is not trivial.\
Collecting Core ML models in one place makes them easier to find and lets everyone benefit.
<hr>
## Base Model Types
- Model files with `split-einsum` in the file name are compatible with all compute units.
- Model files with `original` in the file name are compatible only with CPU & GPU.
- Model files with a `no-i2i` suffix in the file name work only for Text2Image.
- Models with a `_cn` suffix in the file name (or in their repo name) work for Text2Image, Image2Image, and ControlNet.
- Model files with neither a `no-i2i` nor a `_cn` suffix in the file name work for Text2Image and Image2Image, but not ControlNet.
- Models and files that include SDXL in their names are based on the SDXL-v1.0 Base and/or Refiner model.
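The suffix rules above can be sketched as a small helper. This is illustrative only; the function name and logic are my own, derived from the conventions listed:

```python
def capabilities(filename: str) -> set[str]:
    """Infer supported generation modes from a Core ML model file name,
    following this community's suffix conventions."""
    caps = {"Text2Image"}  # every model supports Text2Image
    if "_cn" in filename:
        # _cn models support everything, including ControlNet
        caps |= {"Image2Image", "ControlNet"}
    elif "no-i2i" not in filename:
        # no suffix at all: Image2Image works, ControlNet does not
        caps.add("Image2Image")
    return caps
```

For example, `capabilities("stable-diffusion-2-1_no-i2i_original.zip")` returns only `{"Text2Image"}`.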
## Other Model Types
If you are using Mochi Diffusion v3.2, v4.0, or later versions, some model files with neither a `no-i2i` nor a `_cn` suffix in the file name might need a simple modification to enable Image2Image to work correctly. Please go [HERE](https://huggingface.co/coreml-community/VAEEncoders) for more information.
The various ControlNet types supported by Mochi Diffusion v4.0 and later are enabled by individual ControlNet models. Please go [HERE](https://huggingface.co/coreml-community/ControlNet-Models-For-Core-ML) for a selection of ControlNet models that you can download, and information on their use.
## Usage
Once the chosen base model has been downloaded, simply unzip it and put it in your model folder to use it.
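The unzip step can be done in Finder, from the command line, or with a few lines of Python. The sketch below is illustrative; `install_model` is a hypothetical helper, and the destination should be whatever folder your app reads models from:

```python
import zipfile
from pathlib import Path

def install_model(model_zip: str, models_dir: str) -> Path:
    """Unzip a downloaded Core ML model archive into the app's model folder."""
    dest = Path(models_dir)
    dest.mkdir(parents=True, exist_ok=True)  # create the folder if needed
    with zipfile.ZipFile(model_zip) as zf:
        zf.extractall(dest)  # the app picks up the unzipped model on next scan
    return dest
```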
## Conversion Flags
The models were converted using the following flags:\
`--convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --bundle-resources-for-swift-cli --attention-implementation {SPLIT_EINSUM or ORIGINAL}`
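For reference, these flags slot into Apple's converter CLI roughly like this. This is a sketch: the model version, output directory, and exact flag set depend on your setup and your ml-stable-diffusion version:

```bash
python -m python_coreml_stable_diffusion.torch2coreml \
  --model-version runwayml/stable-diffusion-v1-5 \
  --convert-vae-decoder --convert-vae-encoder \
  --convert-unet --unet-support-controlnet \
  --convert-text-encoder \
  --bundle-resources-for-swift-cli \
  --attention-implementation SPLIT_EINSUM \
  -o output_dir
```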
<hr>
## Contributing
Before joining, we encourage you to have at least one converted model under your account (one this community doesn't already host) that you can contribute. This helps us see who can actively contribute back to the community.\
We also encourage you to follow the [models](#models-name) and [repo](#repo-name) naming schemes.
**Attention**: Apple introduced Image-to-image capabilities in the [ml-stable-diffusion 0.2.0 release](https://github.com/apple/ml-stable-diffusion/releases/tag/0.2.0). Models that do not have a VAE encoder (and therefore cannot use Image-to-image) carry a `no-i2i` suffix right after the model name.\
For example: `stable-diffusion-2-1_no-i2i_original`.\
From now on, only models with a VAE encoder will be accepted.
[Contact us on Discord if you are interested in helping out](https://discord.gg/x2kartzxGv).
## Models Name
Models have the following naming scheme:
1. Original model name
1. Model version (`split-einsum` or `original`)
1. Model size (only if different from `512x512`)
1. VAE name (only if different from the original `VAE`)
1. `cn` (for models compatible with ControlNet)
Each label is separated by an underscore `_`, and all capitalization from the original name is preserved.\
For example: `stable-diffusion-1-5_original_512x768_ema-vae_cn`.
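The scheme above can be expressed as a small builder function. This is a sketch; the function name and defaults are my own, following the rules listed:

```python
def model_file_name(base, variant, size="", vae="", controlnet=False):
    """Assemble a model file name from the labels above, joined by underscores.
    Optional labels are omitted when they match the defaults
    (512x512 size, original VAE, no ControlNet support)."""
    parts = [base, variant]
    if size:        # only if different from 512x512
        parts.append(size)
    if vae:         # only if different from the original VAE
        parts.append(vae)
    if controlnet:  # ControlNet-compatible models get a trailing cn
        parts.append("cn")
    return "_".join(parts)
```

For example, `model_file_name("stable-diffusion-1-5", "original", "512x768", "ema-vae", True)` yields the name shown above.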
## Repo Name
Repos are named with the original diffusers Hugging Face / Civitai repo name prefixed by `coreml-` and have a `_cn` suffix if they are ControlNet compatible.\
For example: `coreml-stable-diffusion-1-5_cn`.
## Repo README Contents
Copy this template and paste it as a header:
```
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
- not-for-all-eyes # only for models where viewer discretion is advised
---
# Core ML converted model
This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).\
Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.
`split_einsum` versions are compatible with all compute units.\
`original` versions are only compatible with `CPU & GPU`.
# <MODEL-NAME-HERE>
Sources: [Hugging Face]() - [CivitAI]()
```
Then copy the original model's README (without the tag section) as the body.
## Repo Directory Structure
```
coreml-stable-diffusion-2-1
├── README.md
├── original
│   ├── 512x768
│   │   ├── stable-diffusion-2-1_original_512x768.zip
│   │   └── ...
│   ├── 768x512
│   │   ├── stable-diffusion-2-1_original_768x512.zip
│   │   └── ...
│   ├── stable-diffusion-2-1_original.zip
│   └── ...
└── split_einsum
    ├── stable-diffusion-2-1_split-einsum.zip
    └── ...
```