HighCWu committed on
Commit
b6f78b1
1 Parent(s): 2a30e9c

End of training

.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ images_0.png filter=lfs diff=lfs merge=lfs -text
+ images_1.png filter=lfs diff=lfs merge=lfs -text
+ images_2.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,80 @@
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- image-to-image
- diffusers
- controlnet
- controllora
---
# ControlLoRA - HighCWu/sd-controllora-face-landmarks

ControlLoRA is a neural network structure, extended from ControlNet, that controls diffusion models by adding extra conditions. This checkpoint is a ControlLoRA conditioned on face landmarks.
ControlLoRA uses the same structure as ControlNet, but its core weights come from the UNet, unmodified. Only the hint-image encoding layers and the linear and conv2d LoRA layers that produce the weight offsets are trained.

The main idea comes from my [ControlLoRA](https://github.com/HighCWu/ControlLoRA) and the SDXL [control-lora](https://huggingface.co/stabilityai/control-lora).
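Conceptually, each LoRA layer adds a trainable low-rank offset to a frozen base weight borrowed from the UNet. A minimal numpy sketch of that idea (dimensions, names, and rank are illustrative, not the repo's actual implementation):

```python
import numpy as np

# Frozen base weight, e.g. a linear layer borrowed from the UNet.
d_out, d_in, rank = 64, 32, 4
W = np.random.randn(d_out, d_in)

# Trainable low-rank factors: only these (not W) receive gradients.
A = np.random.randn(rank, d_in) * 0.01  # "down" projection
B = np.zeros((d_out, rank))             # "up" projection, zero-initialized

def lora_forward(x):
    # Effective weight is the base W plus the low-rank offset B @ A.
    return x @ (W + B @ A).T

x = np.random.randn(1, d_in)
# With B initialized to zero the offset vanishes, so the layer behaves
# exactly like the frozen base layer at the start of training.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because only `A` and `B` (plus the hint encoder) are stored, the checkpoint stays far smaller than a full ControlNet copy of the UNet.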

## Example

1. Clone ControlLoRA from [GitHub](https://github.com/HighCWu/control-lora-v2):
```sh
$ git clone https://github.com/HighCWu/control-lora-v2
```

2. Enter the repo dir:
```sh
$ cd control-lora-v2
```

3. Run the code:
```py
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, UNet2DConditionModel, UniPCMultistepScheduler
from models.controllora import ControlLoRAModel

image = Image.open('<Your Conditioning Image Path>')

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)
controllora = ControlLoRAModel.from_pretrained(
    "HighCWu/sd-controllora-face-landmarks", torch_dtype=torch.float16
)
controllora.tie_weights(unet)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", unet=unet, controlnet=controllora, safety_checker=None, torch_dtype=torch.float16
)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Remove if you do not have xformers installed
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions
pipe.enable_xformers_memory_efficient_attention()

pipe.enable_model_cpu_offload()

image = pipe("Girl smiling, professional dslr photograph, high quality", image, num_inference_steps=20).images[0]

image.show()
```
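The conditioning image is expected to be a face-landmark visualization. As a sketch only (the exact rendering the checkpoint was trained on is not documented here, so the white-dots-on-black style below is an assumption), you can rasterize normalized `(x, y)` landmark coordinates onto a canvas with Pillow:

```python
from PIL import Image, ImageDraw

def draw_landmarks(landmarks, size=512, radius=2):
    """Rasterize normalized (x, y) landmarks in [0, 1] onto a black canvas.

    The point style (white dots on black) is an assumption; check the
    training data in the repo for the exact rendering the model expects.
    """
    canvas = Image.new("RGB", (size, size), "black")
    draw = ImageDraw.Draw(canvas)
    for x, y in landmarks:
        cx, cy = x * size, y * size
        draw.ellipse(
            (cx - radius, cy - radius, cx + radius, cy + radius), fill="white"
        )
    return canvas

# Hypothetical landmarks; in practice they would come from a face-landmark
# detector such as MediaPipe or dlib.
points = [(0.4, 0.45), (0.6, 0.45), (0.5, 0.55), (0.5, 0.7)]
conditioning_image = draw_landmarks(points)
```

The resulting image can then be passed to the pipeline in place of `'<Your Conditioning Image Path>'`.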

Here are some validation images:

prompt: High-quality close-up dslr photo of man wearing a hat with trees in the background
![images_0](./images_0.png)

prompt: Girl smiling, professional dslr photograph, dark background, studio lights, high quality
![images_1](./images_1.png)

prompt: Portrait of a clown face, oil on canvas, bittersweet expression
![images_2](./images_2.png)
config.json ADDED
@@ -0,0 +1,53 @@
{
  "_class_name": "ControlLoRAModel",
  "_diffusers_version": "0.21.0.dev0",
  "_name_or_path": "output/sd-controllora-face-landmarks\\checkpoint-75000",
  "act_fn": "silu",
  "addition_embed_type": null,
  "addition_embed_type_num_heads": 64,
  "addition_time_embed_dim": null,
  "attention_head_dim": 8,
  "block_out_channels": [
    320,
    640,
    1280,
    1280
  ],
  "class_embed_type": null,
  "conditioning_channels": 3,
  "conditioning_embedding_out_channels": [
    16,
    32,
    96,
    256
  ],
  "controlnet_conditioning_channel_order": "rgb",
  "cross_attention_dim": 768,
  "down_block_types": [
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "DownBlock2D"
  ],
  "downsample_padding": 1,
  "encoder_hid_dim": null,
  "encoder_hid_dim_type": null,
  "flip_sin_to_cos": true,
  "freq_shift": 0,
  "global_pool_conditions": false,
  "in_channels": 4,
  "layers_per_block": 2,
  "lora_conv2d_rank": 32,
  "lora_linear_rank": 32,
  "mid_block_scale_factor": 1,
  "norm_eps": 1e-05,
  "norm_num_groups": 32,
  "num_attention_heads": null,
  "num_class_embeds": null,
  "only_cross_attention": false,
  "projection_class_embeddings_input_dim": null,
  "resnet_time_scale_shift": "default",
  "transformer_layers_per_block": 1,
  "upcast_attention": false,
  "use_linear_projection": false
}
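When inspecting a checkpoint programmatically, the fields of most interest are the LoRA ranks and the conditioning layout. A stdlib-only sketch that parses an abbreviated copy of the config above (the key subset is illustrative; the real file carries the full UNet-mirroring layout):

```python
import json

# A few fields from the checkpoint's config.json, abbreviated for clarity.
config_text = '''{
  "_class_name": "ControlLoRAModel",
  "conditioning_channels": 3,
  "lora_conv2d_rank": 32,
  "lora_linear_rank": 32,
  "block_out_channels": [320, 640, 1280, 1280]
}'''

config = json.loads(config_text)

# This checkpoint uses rank-32 LoRA for both linear and conv2d layers,
# and takes a 3-channel (RGB) conditioning image.
assert config["_class_name"] == "ControlLoRAModel"
assert config["lora_linear_rank"] == config["lora_conv2d_rank"] == 32
assert config["conditioning_channels"] == 3
```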
diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67d584ae2cba78dc9eca4a5da7f00e801d004f1625bacf3408d47aa54db8bcc5
size 104845256
image_control.png ADDED
images_0.png ADDED

Git LFS Details

  • SHA256: 2c3720e4572f385a61faa26563cca22df7f3e2ecf2576a74db9859bb3912b1cc
  • Pointer size: 132 Bytes
  • Size of remote file: 2 MB
images_1.png ADDED

Git LFS Details

  • SHA256: 6b45e277f8dc916b7595794b3300baa75e085062066da861de8d2c6d99607596
  • Pointer size: 132 Bytes
  • Size of remote file: 2.18 MB
images_2.png ADDED

Git LFS Details

  • SHA256: ec82655043de7374bba777f45e17e18f32e7b9ed06a40f99177141d2f2b01ede
  • Pointer size: 132 Bytes
  • Size of remote file: 2.03 MB