jrrjrr committed
Commit 0b1bf3a
1 Parent(s): b43fd2e

Upload README.md

Files changed (1)
  1. README.md +6 -9
README.md CHANGED
@@ -9,19 +9,19 @@ tags:
 
 ## For use with a Swift app like [**MOCHI DIFFUSION**](https://github.com/godly-devotion/MochiDiffusion) or the SwiftCLI
 
- The SD models in this repo are all "Original" and built for CPU and GPU. They are each for the output size noted. They are fp16, with the standard SD-1.5 VAE embedded.
+ The SD models in this repo are all "Original" and built for CPU and GPU. They are each for the output size noted. They are fp16, with the standard SD-1.5 VAE embedded. "Split-Einsum" versions that support ControlNet are being added in individual model repos listed on the coreml organization landing page.
 
  The Stable Diffusion v1.5 model and the other SD 1.5 type models contain both the standard Unet and the ControlledUnet used for a ControlNet pipeline. The correct one will be used automatically based on whether a ControlNet is enabled or not.
 
  They have VAEEncoder.mlmodelc bundles that allow Image2Image to operate correctly at the noted resolutions, when used with a current Swift CLI pipeline or a current GUI built with ml-stable-diffusion 0.4.0 or ml-stable-diffusion 1.0.0, such as [**MOCHI DIFFUSION**](https://github.com/godly-devotion/MochiDiffusion) 3.2, 4.0, or later.
 
- All of the ControlNet models in this repo are "Original" ones, built for CPU and GPU compute units (cpuAndGPU) and for SD-1.5 type models. They will not work with SD-2.1 type models. The zip files each have a set of models at 4 resolutions. "Split-Einsum" versions for use with the Neural Engine (CPU and NE) are available at a different repo. A link to that repo is at the bottom of this page.
+ The ControlNet models in this repo have both "Original" and "Split-Einsum" versions, all built for SD-1.5 type models. They will not work with SD-2.1 type models. The zip files marked "SE" each have a single "Split-Einsum" model. The other zip files each have a set of "Original" models at 4 resolutions.
 
  All of the models in this repo work with Swift and the apple/ml-stable-diffusion pipeline (release 0.4.0 or 1.0.0). They were not built for, and will not work with, a Python Diffusers pipeline. They need [**ml-stable-diffusion**](https://github.com/apple/ml-stable-diffusion) for command line use, or a Swift app that supports ControlNet, such as the new (June 2023) [**MOCHI DIFFUSION**](https://github.com/godly-devotion/MochiDiffusion) 4.0 version.
 
  The full SD models are in the "SD" folder of this repo. They are in subfolders by model name and individually zipped for a particular resolution. They need to be unzipped for use after downloading.
 
- The ControlNet model files are in the "CN" folder of this repo. They are zipped and need to be unzipped after downloading. Each zip holds a set of 4 resolutions for that ControlNet type, built for 512x512, 512x768, 768x512 and 768x768.
+ The ControlNet model files are in the "CN" folder of this repo. They are zipped and need to be unzipped after downloading. The larger zips hold "Original" types at 512x512, 512x768, 768x512 and 768x768. The smaller zips marked "SE" have a single "Split-Einsum" model.
 
  There is also a "MISC" folder that has text files with some notes and a screencap of my directory structure. These are provided for those who want to convert models themselves and/or run the models with a SwiftCLI. The notes are not perfect, and may be out of date if any of the Python or CoreML packages referenced have been updated recently. You can open a Discussion here if you need help with any of the "MISC" items.
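Since the paragraphs above point to apple/ml-stable-diffusion for command-line and app use, here is a minimal sketch of driving that pipeline from Swift with a ControlNet enabled. It assumes the release 0.4.0/1.0.0 Swift API; the folder path, model name, and conditioning image are placeholders, so check the ml-stable-diffusion README for the exact signatures in your release.

```swift
import Foundation
import CoreML
import ImageIO
import StableDiffusion  // Swift package from apple/ml-stable-diffusion

// "Original" models in this repo are built for CPU and GPU compute units.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU

// Placeholder path: a folder of unzipped .mlmodelc bundles, with the ControlNet
// bundles in its "controlnet" subfolder (mirroring the CLI's --controlnet option).
let resourceURL = URL(fileURLWithPath: "/path/to/MyModel-512x512")
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourceURL,
    controlNet: ["Canny-5x5"],  // base name of the ControlNet .mlmodelc bundle
    configuration: config,
    reduceMemory: false
)
try pipeline.loadResources()

// Placeholder conditioning image: a prepared Canny edge map at the model's resolution.
let src = CGImageSourceCreateWithURL(URL(fileURLWithPath: "/path/to/canny.png") as CFURL, nil)!
let edgeMap = CGImageSourceCreateImageAtIndex(src, 0, nil)!

// Because a ControlNet is supplied, the ControlledUnet is selected automatically.
var params = StableDiffusionPipeline.Configuration(prompt: "a stone bridge over a river")
params.controlNetInputs = [edgeMap]
params.stepCount = 25
params.seed = 42
let images = try pipeline.generateImages(configuration: params)
```

Image2Image goes through the same `Configuration`, via its `startingImage` and `strength` fields, which is what the VAEEncoder.mlmodelc bundles noted above enable at the listed resolutions.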
 
@@ -29,7 +29,7 @@ For command line use, the "MISC" notes cover setting up a miniconda3 environment
 
 If you are using a GUI like [**MOCHI DIFFUSION**](https://github.com/godly-devotion/MochiDiffusion) 4.0, the app will most likely guide you to the correct location/arrangement for your ControlNet model folder.
 
- Please note that when you unzip the ControlNet files (for example Canny.zip) from this repo, they will unzip into a folder, with the actual four model files inside that folder. This folder is just for the zipping process. **What you want to move into your ControlNet model folder in Mochi Diffusion will be the individual files, not the folder they unzip into.** To make things even more confusing, on some Mac systems, an individual ControlNet model file, for example Canny-5x5.mlmodelc, will appear in Finder as a folder, not a file. You want to move the Canny-5x5.mlmodelc file or folder, and the 3 other .mlmodelc files or folders, into your ControlNet store folder. Don't move the Canny folder. This is different from base models, where you do want to be moving the folder that the downloaded zip file unzips into. See the images [**here**](https://huggingface.co/jrrjrr/CoreML-Models-For-ControlNet/blob/main/CN/-Settings.jpg) and [**here**](https://huggingface.co/jrrjrr/CoreML-Models-For-ControlNet/blob/main/CN/-Folders.jpg) for an example of how my folders are set up for Mochi Diffusion.
+ Please note that when you unzip the "Original" ControlNet files (for example Canny.zip) from this repo, they will unzip into a folder, with the actual four model files inside that folder. This folder is just for the zipping process. **What you want to move into your ControlNet model folder in Mochi Diffusion will be the individual files, not the folder they unzip into.** The "Split-Einsum" zips have just a single file and don't use a holding folder. To make things even more confusing, on some Mac systems, an individual ControlNet model file, for example Canny-5x5.mlmodelc, will appear in Finder as a folder, not a file. You want to move the Canny-5x5.mlmodelc file or folder, and the 3 other .mlmodelc files or folders, into your ControlNet store folder. Don't move the Canny folder. This is different from base models, where you do want to move the folder that the downloaded zip file unzips into. See the images [**here**](https://huggingface.co/jrrjrr/CoreML-Models-For-ControlNet/blob/main/CN/-Settings.jpg) and [**here**](https://huggingface.co/jrrjrr/CoreML-Models-For-ControlNet/blob/main/CN/-Folders.jpg) for an example of how my folders are set up for Mochi Diffusion.
 
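To make the move-the-files instruction above concrete, here is a small hypothetical sketch; the paths are invented, but it shows that `.mlmodelc` bundles are really directories on disk (which is why Finder sometimes shows them as folders), and that the individual bundles, not the wrapper folder from the zip, are what belong in the ControlNet store folder.

```swift
import Foundation

// Hypothetical layout after unzipping (paths are placeholders):
//   ~/MochiModels/models/MyModel-512x768/        <- base model: keep the whole unzipped folder
//   ~/MochiModels/controlnet/Canny-5x5.mlmodelc  <- ControlNet: move the individual bundles
//   ~/MochiModels/controlnet/Depth-5x5.mlmodelc
let cnStore = FileManager.default.homeDirectoryForCurrentUser
    .appendingPathComponent("MochiModels/controlnet")

// List what the pipeline will actually see in the store folder; each .mlmodelc
// bundle is a directory on disk.
let bundles = try FileManager.default
    .contentsOfDirectory(at: cnStore, includingPropertiesForKeys: nil)
    .filter { $0.pathExtension == "mlmodelc" }
print("ControlNet bundles:", bundles.map(\.lastPathComponent))
```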
  The sizes noted for all model type inputs/outputs are WIDTH x HEIGHT. A 512x768 is "portrait" orientation and a 768x512 is "landscape" orientation.
 
@@ -45,7 +45,7 @@ Each folder contains 4 zipped model files, output sizes as indicated: 512x512, 5
  - Stable Diffusion v1.5, "Original"
 
  ## ControlNet Models - All Current SD-1.5-Type ControlNet Models
- Each zip file contains a set of 4 resolutions: 512x512, 512x768, 768x512 and 768x768
+ Each larger zip file contains a set of 4 "Original" models at resolutions of 512x512, 512x768, 768x512 and 768x768. Each smaller zip file, with the "SE" notation, contains a single "Split-Einsum" file.
  - Canny -- Edge Detection, Outlines As Input
  - Depth -- Reproduces Depth Relationships From An Image
  - InPaint -- Use Masks To Define And Modify An Area (not sure how this works)
@@ -59,7 +59,4 @@ Each zip file contains a set of 4 resolutions: 512x512, 512x768, 768x512 and 768
  - Segmentation -- Find And Reuse Distinct Areas
  - Shuffle -- Find And Reorder Major Elements
  - SoftEdge -- Find And Reuse Soft Edges
- - Tile -- Subtle Variations Within Batch Run
-
- ## A Set Of 14 ControlNet Models Built For Split-Einsum Are Available [**HERE**](https://huggingface.co/atatakun/CoreML-Models-For-ControlNet1-1-NE/tree/main)
- They are divided into 3 zip files: controlnet5x5.NE.zip, controlnet5x5.NE2.zip, and controlnet5x5.NE3.zip
+ - Tile -- Subtle Variations Within Batch Run
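A closing note on the two variants named throughout this diff: the "Original" builds target CPU and GPU, and the "Split-Einsum" ("SE") builds target the Neural Engine. That maps onto Core ML's compute-unit settings roughly as sketched below (an assumption drawn from the notes above, not from this repo's converter scripts).

```swift
import CoreML

// "Original" models: built for CPU and GPU (cpuAndGPU), per the notes above.
let originalConfig = MLModelConfiguration()
originalConfig.computeUnits = .cpuAndGPU

// "Split-Einsum" ("SE") models: intended for the Neural Engine path.
// (.cpuAndNeuralEngine requires macOS 13+; .all also allows the Neural Engine.)
let splitEinsumConfig = MLModelConfiguration()
splitEinsumConfig.computeUnits = .cpuAndNeuralEngine
```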
 