---
title: Core ML Models
emoji: 🐱
pinned: false
tags:
- coreml
- stable-diffusion
---
[↓↓↓ **Scroll down (or click here) to see models** ↓↓↓](https://huggingface.co/coreml#models)
# Core ML Models Repository
Thanks to Apple engineers, we can now run Stable Diffusion on Apple Silicon using Core ML!\
However, it is hard to find compatible models, and converting models isn't the easiest thing to do.\
This organization gathers Core ML models in one place, making them easier to find so everyone can benefit.
## Conversion Flags
The models were converted using the following flags:\
`--convert-vae-decoder --convert-vae-encoder --convert-unet --convert-text-encoder --bundle-resources-for-swift-cli --attention-implementation {SPLIT_EINSUM or ORIGINAL}`
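As a concrete sketch, the full converter invocation behind these flags can be assembled as below. This assumes Apple's [ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion) package is installed; the model id, the output directory, and the `--model-version`/`-o` arguments are assumptions based on Apple's README, not taken from this page:

```python
# Sketch: assemble the torch2coreml invocation that produced these models.
# The model id and output directory below are hypothetical examples.
model_id = "stabilityai/stable-diffusion-2-1"   # assumption: any diffusers model id works here
attention = "SPLIT_EINSUM"                      # or "ORIGINAL", depending on the target compute unit

cmd = [
    "python", "-m", "python_coreml_stable_diffusion.torch2coreml",
    "--convert-vae-decoder", "--convert-vae-encoder",
    "--convert-unet", "--convert-text-encoder",
    "--bundle-resources-for-swift-cli",
    "--attention-implementation", attention,
    "--model-version", model_id,
    "-o", "output",
]
print(" ".join(cmd))
```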
## Model version: `split_einsum` vs. `original`
Depending on which compute unit you select, you will need the matching model version:
- `split_einsum` is compatible with all compute units
- `original` is only compatible with CPU and GPU
## Usage
Once you have downloaded your chosen model, simply unzip it before use.
## Contributing
Before joining, we encourage you to convert at least one model that this community doesn't already have and host it under your account, ready to contribute. This helps us see who can actually contribute back to the community.\
We also encourage you to follow the [model](#models-name) and [repo](#repo-name) naming schemes.
**Attention**: Apple introduced image-to-image capabilities in the [ml-stable-diffusion 0.2.0 release](https://github.com/apple/ml-stable-diffusion/releases/tag/0.2.0). All models that do not have a VAE encoder (and therefore cannot be used for image-to-image) carry a `no-i2i` suffix right after the model name.\
For example: `stable-diffusion-2-1_no-i2i_original`.\
From now on, only models with a VAE encoder will be accepted.
[Contact us on Discord if you are interested in helping out](https://discord.gg/x2kartzxGv).
## Models Name
Models have the following naming scheme:
1. Original model name
1. Model version (`split-einsum` or `original`)
1. Model size (only if different from `512x512`)
1. VAE name (only if different from the original VAE)
Labels are separated by underscores (`_`), and capitalization from the original name is preserved.\
For example: `stable-diffusion-2-1_original_512x768_ema-vae`.
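The scheme above (including the `no-i2i` marker from the Contributing section) can be sketched as a small helper; the function itself is hypothetical and not part of any tooling:

```python
def model_name(base, version, size="512x512", vae=None, i2i=True):
    """Assemble a model name per the naming scheme (hypothetical helper)."""
    parts = [base]
    if not i2i:                  # models without a VAE encoder get the no-i2i marker
        parts.append("no-i2i")
    parts.append(version)        # "split-einsum" or "original"
    if size != "512x512":        # size label only when it differs from 512x512
        parts.append(size)
    if vae is not None:          # VAE label only when it differs from the original VAE
        parts.append(vae)
    return "_".join(parts)

print(model_name("stable-diffusion-2-1", "original", size="512x768", vae="ema-vae"))
# -> stable-diffusion-2-1_original_512x768_ema-vae
```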
## Repo Name
Repos are named with the original diffusers Hugging Face / Civitai repo name prefixed by `coreml-`.\
For example: `coreml-stable-diffusion-2-1`.
## Repo README Contents
Copy this template and paste it as a header:
```
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML converted model
This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).\
Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.
`split_einsum` versions are compatible with all compute units.\
`original` versions are only compatible with `CPU & GPU`.
#
Sources: [Hugging Face]() - [CivitAI]()
```
Then copy the original model's README as the body.
## Repo Directory Structure
```
coreml-stable-diffusion-2-1
├── README.md
├── original
│   ├── 512x768
│   │   ├── stable-diffusion-2-1_original_512x768.zip
│   │   └── ...
│   ├── 768x512
│   │   ├── stable-diffusion-2-1_original_768x512.zip
│   │   └── ...
│   ├── stable-diffusion-2-1_original.zip
│   └── ...
└── split_einsum
    ├── stable-diffusion-2-1_split-einsum.zip
    └── ...
```
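The layout above can be sketched as a path helper (hypothetical, not part of any tooling); note that the version directory uses an underscore (`split_einsum`) while file names use a hyphen (`split-einsum`):

```python
def zip_path(base, version_dir, size="512x512"):
    """Return the repo-relative path of a model zip (hypothetical helper)."""
    label = version_dir.replace("_", "-")      # directory "split_einsum" -> file label "split-einsum"
    size_part = [] if size == "512x512" else [size]
    filename = "_".join([base, label] + size_part) + ".zip"
    return "/".join([version_dir] + size_part + [filename])

print(zip_path("stable-diffusion-2-1", "original", "512x768"))
# -> original/512x768/stable-diffusion-2-1_original_512x768.zip
```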