-------------------------------------------------------------------------------

Model folders (suffixed with -cn when built for use with ControlNet) live in the /models folder.

Each -cn model must have a /controlnet folder inside it that is a symlink to the real /controlnet folder.  You need to add this symlink to any model you download from my HF repo, after you unzip the model and place it in your /models folder.

(( The Swift CLI scripts look for the ControlNet *.mlmodelc you are using inside the full model's /controlnet folder.  This is crazy: it means every full model needs a /controlnet folder inside it, symlinked to the real /controlnet store folder.  That is how they set it up, and I haven't looked into editing their scripts yet. ))
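
For example, assuming the folder layout used in these notes (the real /controlnet store folder sitting alongside /models, and a downloaded model such as SD15-cn unzipped into /models), the symlink can be created from the ml-stable-diffusion folder like this; adjust the paths to match your own setup:

    cd ../models/SD15-cn
    ln -s ../../controlnet controlnet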

The input image(s) you want to use with ControlNet need to be in the /input folder.

Generated images will be saved to the /images folder.

A screencap of the folder structure matching all these notes is in the MISC section of my HF page.

-------------------------------------------------------------------------------

Inference Without ControlNet (Using any standard SD-1.5 or SD-2.1 type CoreML model):

Test your setup with this first, before trying with ControlNet.

    conda activate python_playground
    cd xxxxx/miniconda3/envs/python_playground/coreml-swift/ml-stable-diffusion

    swift run StableDiffusionSample "a photo of a cat" --seed 12 --guidance-scale 8.0 --step-count 24 --image-count 1 --scheduler dpmpp --compute-units cpuAndGPU --resource-path ../models/SD21 --output-path ../images

-------------------------------------------------------------------------------

Inference With ControlNet:

    conda activate python_playground
    cd /Users/jrittvo/miniconda3/envs/python_playground/coreml-swift/ml-stable-diffusion

    swift run StableDiffusionSample "a photo of a green yellow and red bird" --seed 12 --guidance-scale 8.0 --step-count 24 --image-count 1 --scheduler dpmpp --compute-units cpuAndGPU --resource-path ../models/SD15-cn --controlnet Canny --controlnet-inputs ../input/canny-bird.png --output-path ../images


--negative-prompt    "in quotes"
--seed               default is random
--guidance-scale     default is 7
--step-count         default is 50
--image-count        batch size, default is 1
--image              path to image for image2image
--strength           strength for image2image, 0.0 - 1.0, default 0.5
--scheduler          pndm or dpmpp (DPM++), default is pndm
--compute-units      all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine
--resource-path      one of the model checkpoint .mlmodelc bundle folders
--controlnet         path/controlnet-model  <<path/controlnet-model-2>>      (no extension)
--controlnet-inputs  path/image.png  <<path/image-2.png>>      (same order as --controlnet)
--output-path        folder to save image(s)         (auto-named to: prompt.seed.final.png)
--help
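
Rough example combining some of the flags above, for image2image with a negative prompt.  The input file name here is made up; substitute your own, and the model and folder paths follow the layout used earlier:

    swift run StableDiffusionSample "a photo of a cat" --negative-prompt "blurry, low quality" --image ../input/cat-source.png --strength 0.7 --seed 12 --guidance-scale 8.0 --step-count 24 --scheduler dpmpp --compute-units cpuAndGPU --resource-path ../models/SD21 --output-path ../images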