jrrjrr committed on
Commit
2812f99
1 Parent(s): f8e6d10

Update README.md


Update README.md to reflect the current converted model types. Specifically, adds references to ControlNet-capable models, identified by a _cn suffix.

Also provides a link to a repo with replacement VAEEncoder.mlmodelc files that enable Image2Image for models that were converted before the conversion pipeline adopted ml-stable-diffusion 0.4.0.

Does NOT include any reference to obtaining individual ControlNet models, such as Scribble or Canny; that information could be added to this revision as well.
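The file-name conventions this commit documents amount to a small decision table. A minimal sketch in Python, assuming a hypothetical helper name and return format (only the `split_einsum` / `original` / `no-i2i` / `_cn` rules themselves are taken from the README):

```python
# Hypothetical helper illustrating the file-name conventions from this repo's
# README. The suffix rules come from the README; the function name and return
# format are this sketch's own invention.
def model_capabilities(filename: str) -> dict:
    """Infer compute-unit compatibility and supported tasks from a model file name."""
    name = filename.lower()
    # split_einsum models run on all compute units; original is CPU & GPU only.
    all_compute_units = "split_einsum" in name
    if "no-i2i" in name:
        tasks = ["Text2Image"]  # no-i2i suffix: Text2Image only
    elif "_cn" in name:
        # cn suffix: Text2Image, Image2Image and ControlNet
        tasks = ["Text2Image", "Image2Image", "ControlNet"]
    else:
        # neither suffix: Text2Image and Image2Image
        tasks = ["Text2Image", "Image2Image"]
    return {"all_compute_units": all_compute_units, "tasks": tasks}
```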

Files changed (1)
  1. README.md +16 -8
README.md CHANGED
@@ -17,20 +17,28 @@ By organizing Core ML models in one place, it will be easier to find them and fo
 
 <hr>
 
-## Conversion Flags
+## Model Types
 
-The models were converted using the following flags:\
-`--convert-vae-decoder --convert-vae-encoder --convert-unet --convert-text-encoder --bundle-resources-for-swift-cli --attention-implementation {SPLIT_EINSUM or ORIGINAL}`
+Model files with `split_einsum` in the file name are compatible with all compute units.
+
+Model files with `original` in the file name are only compatible with CPU & GPU.
+
+Model files with a `no-i2i` suffix in the file name only work for Text2Image.
+
+Models with a `cn` suffix in the file name (or in their repo name) will work for Text2Image, Image2Image and ControlNet.
 
-## Model version: `split_einsum` VS `original`
+Model files with neither a `no-i2i` nor a `cn` suffix in the file name will work for Text2Image and Image2Image.
 
-Depending on what compute unit you select, you will need to use the correct model version:
-- `split_einsum` is compatible with all compute unit
-- `original` is only compatible with CPU & GPU
+If you are using Mochi Diffusion v3.2, v4.0 or later versions, some model files with neither a `no-i2i` nor a `cn` suffix in the file name might need a simple modification to enable Image2Image to work correctly. Please go [HERE](https://huggingface.co/jrrjrr/VAEEncoder_Set_For_Image2Image_With_Mochi_Diffusion_v3.2) for more information.
 
 ## Usage
 
-Once the chosen model has been downloaded, simply unzip it to use it.
+Once the chosen model has been downloaded, simply unzip it and put it in your model folder to use it.
+
+## Conversion Flags
+
+The models were converted using the following flags:\
+`--convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --bundle-resources-for-swift-cli --attention-implementation {SPLIT_EINSUM or ORIGINAL}`
 
 <hr>
 
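For context, the flags quoted in the README's Conversion Flags section belong to Apple's ml-stable-diffusion `torch2coreml` converter. A sketch of what a full invocation might look like, assuming a hypothetical model ID and output directory (the script below only assembles and prints the command, since an actual run requires the ml-stable-diffusion package and large model downloads):

```shell
# Sketch of the full conversion command implied by the flags above.
# MODEL_ID and OUT_DIR are hypothetical placeholders; the flags themselves
# are quoted from the README.
MODEL_ID="runwayml/stable-diffusion-v1-5"  # hypothetical example model
OUT_DIR="./coreml-output"                  # hypothetical output directory
CMD="python -m python_coreml_stable_diffusion.torch2coreml \
--model-version $MODEL_ID \
--convert-vae-decoder --convert-vae-encoder \
--convert-unet --unet-support-controlnet \
--convert-text-encoder \
--bundle-resources-for-swift-cli \
--attention-implementation SPLIT_EINSUM \
-o $OUT_DIR"
echo "$CMD"
```

Models converted with `--attention-implementation ORIGINAL` instead would be the CPU & GPU-only `original` variants described above.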