GuiyeC committed
Commit ce8611e
1 Parent(s): d0a51b3

Update README.md

Files changed (1)
  1. README.md +51 -10
README.md CHANGED
@@ -2,7 +2,32 @@
  license: creativeml-openrail-m
  ---
 
- ## <a name="converting-models-to-coreml"></a> Converting Models to Core ML
 
  **Step 1:** Create a Python environment and install dependencies:
 
@@ -13,27 +38,43 @@ cd /path/to/unziped/scripts/location
  pip install -e .
  ```
 
- **Step 2:** Log in to or register for your [Hugging Face account](https://huggingface.co), generate a [User Access Token](https://huggingface.co/settings/tokens) and use this token to set up Hugging Face API access by running `huggingface-cli login` in a Terminal window.
 
- **Step 3a:** Navigate to the version of Stable Diffusion that you would like to use on [Hugging Face Hub](https://huggingface.co/models?search=stable-diffusion) and accept its Terms of Use. The default model version is [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4). The model version may be changed by the user as described in the next step.
 
- **Step 3b:** You may also convert an existing model from a ckpt package by using the `convert_original_stable_diffusion_to_diffusers.py` script. After converting it you can continue using the `--model-location` argument to indicate the location of your converted model.
 
- **Step 4:** Execute the following command from the Terminal to generate Core ML model files (`.mlpackage`) and Guernika compatible model.
 
  ```shell
- python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-encoder --convert-vae-decoder --convert-safety-checker -o <output-mlpackages-directory> --bundle-resources-for-swift-cli
  ```
 
- **WARNING:** This command may download several GB worth of PyTorch checkpoints from Hugging Face.
 
- This generally takes 15-20 minutes on an M1 MacBook Pro. Upon successful execution, the 4 neural network models that comprise Stable Diffusion will have been converted from PyTorch to Core ML (`.mlpackage`) and saved into the specified `<output-mlpackages-directory>`. Some additional notable arguments:
 
  - `--model-version`: The model version defaults to [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4). Developers may specify other versions that are available on [Hugging Face Hub](https://huggingface.co/models?search=stable-diffusion), e.g. [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) & [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
- -
  - `--model-location`: The location of a local model defaults to `None`.
 
- - `--bundle-resources-for-swift-cli`: Compiles all 4 models and bundles them along with necessary resources for text tokenization into `<output-mlpackages-directory>/Resources` which should provided as input to the Swift package. This flag is not necessary for the diffusers-based Python pipeline.
 
  - `--chunk-unet`: Splits the Unet model in two approximately equal chunks (each with less than 1GB of weights) for mobile-friendly deployment. This is **required** for ANE deployment on iOS and iPadOS. This is not required for macOS. Swift CLI is able to consume both the chunked and regular versions of the Unet model but prioritizes the former. Note that chunked unet is not compatible with the Python pipeline because Python pipeline is intended for macOS only. Chunking is for on-device deployment with Swift only.
 
 
  license: creativeml-openrail-m
  ---
 
+ # Guernika
+
+ This repository contains Guernika compatible models and instructions to convert existing models.
+
+ ## <a name="converting-models-to-guernika"></a> Converting Models to Guernika
+
+ ### <a name="converting-models-easy"></a> Easy mode
+
+ **Step 1:** Download and install [`Guernika Model Converter`](https://huggingface.co/Guernika/CoreMLStableDiffusion/resolve/main/GuernikaModelConverter.pkg).
+
+ **Step 2:** Launch `Guernika Model Converter` from your `Applications` folder; the app may take a few seconds to load.
+
+ **Step 3:** Once the app has loaded, you will be able to select which model you want to convert:
+
+ - You can input a model identifier (e.g. CompVis/stable-diffusion-v1-4) to download from Hugging Face. You may have to log in to or register for your [Hugging Face account](https://huggingface.co), generate a [User Access Token](https://huggingface.co/settings/tokens) and use this token to set up Hugging Face API access by running `huggingface-cli login` in a Terminal window.
+
+ - You can select a local model from your machine: `Select local model`
+
+ - You can select a local .CKPT model from your machine: `Select CKPT`
+
+ **Step 4:** Once you've chosen the model you want to convert, you can choose which modules to convert and/or whether to chunk the UNet module (recommended for iOS/iPadOS devices).
+
+ **Step 5:** Once you're happy with your selection, click `Convert to Guernika` and wait for the app to complete the conversion.
+ **WARNING:** This process may download several GB worth of PyTorch checkpoints from Hugging Face and may take a long time to complete (15-20 minutes on an M1 machine).
+
+ ### <a name="converting-models-advanced"></a> Advanced mode
 
  **Step 1:** Create a Python environment and install dependencies:
 
  pip install -e .
  ```
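
The diff elides the first lines of this Step 1 block (only `pip install -e .` and the closing fence survive the hunk). A minimal sketch of the environment setup it describes might look like the following; the environment name and the scripts location are placeholders, not paths from this repository:

```shell
# Hypothetical sketch of Step 1: create an isolated Python environment
# for the converter scripts. Names and paths are placeholders.
python3 -m venv guernika-env
. guernika-env/bin/activate
python -m pip install --upgrade pip
# cd /path/to/unzipped/scripts/location   # wherever the scripts were unzipped
# pip install -e .                        # installs python_coreml_stable_diffusion
```

Any virtual-environment tool (venv, conda) should work; the important part is running `pip install -e .` from the unzipped scripts directory inside that environment.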
 
+ **Step 2:** Choose which model you want to convert:
 
+ **Hugging Face model:** Log in to or register for your [Hugging Face account](https://huggingface.co), generate a [User Access Token](https://huggingface.co/settings/tokens) and use this token to set up Hugging Face API access by running `huggingface-cli login` in a Terminal window.
+ Once you know which model you want to convert and have accepted its Terms of Use, run the following command, replacing `<model-identifier>` with the desired model's identifier:
 
+ ```shell
+ python -m python_coreml_stable_diffusion.torch2coreml --model-version <model-identifier> -o <output-directory> --convert-unet --convert-text-encoder --convert-vae-encoder --convert-vae-decoder --convert-safety-checker --bundle-resources-for-guernika --clean-up-mlpackages
+ ```
+
+ **Local model:** Run the following command, replacing `<model-location>` with the desired model's location path:
+
+ ```shell
+ python -m python_coreml_stable_diffusion.torch2coreml --model-location <model-location> -o <output-directory> --convert-unet --convert-text-encoder --convert-vae-encoder --convert-vae-decoder --convert-safety-checker --bundle-resources-for-guernika --clean-up-mlpackages
+ ```
 
+ **Local CKPT:** Run the following command, replacing `<checkpoint-path>` with the desired CKPT's location path:
 
  ```shell
+ python -m python_coreml_stable_diffusion.torch2coreml --checkpoint-path <checkpoint-path> -o <output-directory> --convert-unet --convert-text-encoder --convert-vae-encoder --convert-vae-decoder --convert-safety-checker --bundle-resources-for-guernika --clean-up-mlpackages
  ```
 
+ **WARNING:** These commands may download several GB worth of PyTorch checkpoints from Hugging Face.
+
+ This generally takes 15-20 minutes on an M1 MacBook Pro. Upon successful execution, the neural network models that comprise the Stable Diffusion model will have been converted from PyTorch to Guernika and saved into the specified `<output-directory>`.
 
+ #### <a name="converting-models--arguments"></a> Notable arguments
 
  - `--model-version`: The model version defaults to [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4). Developers may specify other versions that are available on [Hugging Face Hub](https://huggingface.co/models?search=stable-diffusion), e.g. [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) & [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
+
  - `--model-location`: The location of a local model defaults to `None`.
+
+ - `--checkpoint-path`: The location of a local .CKPT model defaults to `None`.
+
+ - `--bundle-resources-for-guernika`: Compiles all 4 models and bundles them along with the necessary resources for text tokenization into `<output-directory>/Resources`, which should be provided as input to the Swift package. This flag is not necessary for the diffusers-based Python pipeline.
 
+ - `--clean-up-mlpackages`: Cleans up the created .mlpackages, leaving only the compiled model.
 
  - `--chunk-unet`: Splits the Unet model in two approximately equal chunks (each with less than 1GB of weights) for mobile-friendly deployment. This is **required** for ANE deployment on iOS and iPadOS. It is not required for macOS. The Swift CLI is able to consume both the chunked and regular versions of the Unet model but prioritizes the former. Note that the chunked Unet is not compatible with the Python pipeline because the Python pipeline is intended for macOS only. Chunking is for on-device deployment with Swift only.
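
Putting the notable arguments together, a hypothetical invocation for an iOS/iPadOS target (chunked UNet) might look like the sketch below. The model identifier and output directory are placeholders, and the sketch only `echo`es the command line rather than executing it, since the real conversion downloads several GB of checkpoints and runs for 15-20 minutes:

```shell
# Hypothetical end-to-end example: convert all modules, chunk the UNet for ANE,
# bundle for Guernika, and clean up intermediate .mlpackages.
# MODEL and OUT are placeholders; echo makes the sketch safe to run as-is.
MODEL="runwayml/stable-diffusion-v1-5"
OUT="./guernika-output"
echo python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version "$MODEL" -o "$OUT" \
    --convert-unet --chunk-unet \
    --convert-text-encoder --convert-vae-encoder --convert-vae-decoder \
    --convert-safety-checker \
    --bundle-resources-for-guernika --clean-up-mlpackages
```

Drop the leading `echo` to actually run the conversion; omit `--chunk-unet` when targeting macOS only.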
80