To convert a .ckpt or .safetensors SD-1.5-type model for use with ControlNet:
Download this Python script and place it in the same folder as the model you want to convert:
https://github.com/huggingface/diffusers/raw/main/scripts/convert_original_stable_diffusion_to_diffusers.py
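If you prefer to fetch the script programmatically, a minimal sketch using only the Python standard library might look like this (the URL is the one given above; the function name is my own):

```python
from pathlib import Path
from urllib.request import urlopen

SCRIPT_URL = (
    "https://github.com/huggingface/diffusers/raw/main/scripts/"
    "convert_original_stable_diffusion_to_diffusers.py"
)

def download_script(url: str, dest_dir: str) -> Path:
    """Download the conversion script into the folder holding the model."""
    # Name the local file after the last path segment of the URL.
    dest = Path(dest_dir) / url.rsplit("/", 1)[-1]
    with urlopen(url) as resp:
        dest.write_bytes(resp.read())
    return dest
```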
Activate the Conda environment with conda activate python_playground
Navigate to the folder where the model and script are located.
If your model is in CKPT format, run:
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path <MODEL-NAME>.ckpt --device cpu --extract_ema --dump_path diffusers
If your model is in SafeTensors format, run:
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path <MODEL-NAME>.safetensors --from_safetensors --device cpu --extract_ema --dump_path diffusers
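The two commands differ only in the --from_safetensors flag. A small helper (hypothetical, not part of the script) can pick the right variant from the file extension:

```python
from pathlib import Path

def build_convert_command(model_path: str, dump_path: str = "diffusers") -> list[str]:
    """Assemble the diffusers-conversion command line, adding
    --from_safetensors only for .safetensors models."""
    cmd = [
        "python", "convert_original_stable_diffusion_to_diffusers.py",
        "--checkpoint_path", model_path,
    ]
    if Path(model_path).suffix == ".safetensors":
        cmd.append("--from_safetensors")
    cmd += ["--device", "cpu", "--extract_ema", "--dump_path", dump_path]
    return cmd
```

The resulting list can be passed straight to subprocess.run.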
Copy or move the resulting diffusers folder to:
xxxxx/miniconda3/envs/python_playground/coreml-swift/convert
Then change into that folder:
cd xxxxx/miniconda3/envs/python_playground/coreml-swift/convert
For 512x512 SPLIT_EINSUM, run:
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-vae-encoder --unet-support-controlnet --model-version "./diffusers" --bundle-resources-for-swift-cli --attention-implementation SPLIT_EINSUM -o "./Split-512x512"
For 512x512 ORIGINAL, run:
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-encoder --convert-vae-decoder --unet-support-controlnet --model-version "./diffusers" --bundle-resources-for-swift-cli --attention-implementation ORIGINAL --latent-h 64 --latent-w 64 --compute-unit CPU_AND_GPU -o "./Orig-512x512"
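The two torch2coreml invocations share most of their flags; only the attention implementation (and, for ORIGINAL, the explicit latent size and compute unit) differ. A sketch of a helper assembling the argument list, using only the flags shown above (the helper itself is my own, not part of the package):

```python
def build_torch2coreml_args(variant: str, out_dir: str) -> list[str]:
    """Build the torch2coreml argument list for either the
    SPLIT_EINSUM or ORIGINAL attention implementation."""
    args = [
        "python", "-m", "python_coreml_stable_diffusion.torch2coreml",
        "--convert-unet", "--convert-text-encoder",
        "--convert-vae-decoder", "--convert-vae-encoder",
        "--unet-support-controlnet",
        "--model-version", "./diffusers",
        "--bundle-resources-for-swift-cli",
        "--attention-implementation", variant,
    ]
    if variant == "ORIGINAL":
        # ORIGINAL needs explicit latent dims (512 px / 8 = 64 latents)
        # and is run on CPU and GPU rather than the Neural Engine.
        args += ["--latent-h", "64", "--latent-w", "64",
                 "--compute-unit", "CPU_AND_GPU"]
    args += ["-o", out_dir]
    return args
```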
The finished model files will be in the Split-512x512/Resources folder or the Orig-512x512/Resources folder, depending on which command you ran.
Rename the Resources folder with a good full model name and move it to the model store (xxxxx/miniconda3/envs/python_playground/coreml-swift/Models).
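The rename-and-move step can be sketched with the standard library (the function and model-store argument are illustrative; the actual store path is the one given above):

```python
import shutil
from pathlib import Path

def publish_model(resources_dir: str, model_store: str, model_name: str) -> Path:
    """Move the converted Resources folder into the model store
    under a descriptive full model name."""
    dest = Path(model_store) / model_name
    # shutil.move renames the directory when dest does not yet exist.
    shutil.move(resources_dir, dest)
    return dest
```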
Everything else can be discarded.