---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- image-to-image
- diffusers
- controlnet
- controllora
---
    
# ControlLoRA - Face Landmarks Version

ControlLoRA is a neural network structure, extended from ControlNet, that controls diffusion models by adding extra conditions. This checkpoint corresponds to the ControlLoRA conditioned on face landmarks.

ControlLoRA uses the same structure as ControlNet, but its core weights come directly from the UNet and are left unmodified. Only the hint-image encoding layers and the linear and conv2d LoRA layers that produce the weight offsets are trained.
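
For intuition, below is a minimal sketch of the weight-offset idea, assuming a plain PyTorch linear layer. This is illustrative only, not the repo's actual `ControlLoRAModel` implementation: the base weight is frozen and shared with the UNet, and only the low-rank factors are trained, so the effective weight is `W + (alpha / r) * B @ A`.

```py
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: frozen base weight plus a trainable low-rank offset."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        self.base.requires_grad_(False)  # core weight stays tied to the UNet, unmodified
        self.lora_down = nn.Linear(base.in_features, rank, bias=False)  # A: d_in -> r
        self.lora_up = nn.Linear(rank, base.out_features, bias=False)   # B: r -> d_out
        nn.init.zeros_(self.lora_up.weight)  # offset starts at zero, so W_eff == W at init
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only lora_down and lora_up receive gradients during training
        return self.base(x) + self.scale * self.lora_up(self.lora_down(x))
```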

The main idea comes from my [ControlLoRA](https://github.com/HighCWu/ControlLoRA) and Stability AI's SDXL [control-lora](https://huggingface.co/stabilityai/control-lora).

## Example

1. Clone ControlLoRA from [GitHub](https://github.com/HighCWu/control-lora-v2):
```sh
$ git clone https://github.com/HighCWu/control-lora-v2
```

2. Enter the repo dir:
```sh
$ cd control-lora-v2
```

3. Run the following code:
```py
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, UNet2DConditionModel, UniPCMultistepScheduler
from models.controllora import ControlLoRAModel

# The face-landmarks conditioning image
image = Image.open('<Your Conditioning Image Path>')

base_model = "runwayml/stable-diffusion-v1-5"

# Load the base UNet; ControlLoRA reuses these weights unmodified
unet = UNet2DConditionModel.from_pretrained(
    base_model, subfolder="unet", torch_dtype=torch.float16
)
controllora = ControlLoRAModel.from_pretrained(
    "HighCWu/sd-controllora-face-landmarks", torch_dtype=torch.float16
)
# Tie the ControlLoRA core weights to the UNet so they are shared, not copied
controllora.tie_weights(unet)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model, unet=unet, controlnet=controllora, safety_checker=None, torch_dtype=torch.float16
)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Remove if you do not have xformers installed
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions
pipe.enable_xformers_memory_efficient_attention()

pipe.enable_model_cpu_offload()

# Generate an image guided by both the prompt and the landmark condition
image = pipe("Girl smiling, professional dslr photograph, high quality", image, num_inference_steps=20).images[0]

image.show()
```
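
The pipeline expects a rendering of the detected face landmarks as its conditioning image. As a rough sketch of how one might be produced, the snippet below detects landmarks with mediapipe's FaceMesh and draws them as white dots on a black canvas. The rendering style here is an assumption for illustration; the exact format used during training may differ, so check the preprocessing in the [control-lora-v2](https://github.com/HighCWu/control-lora-v2) repo.

```py
# Hypothetical helper for producing a face-landmarks conditioning image.
# The rendering (white dots on black) is an assumption; match it to the
# preprocessing used in the training scripts before relying on it.
import mediapipe as mp
import numpy as np
from PIL import Image, ImageDraw

def face_landmarks_image(photo: Image.Image) -> Image.Image:
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
        results = mesh.process(np.array(photo.convert("RGB")))
    canvas = Image.new("RGB", photo.size, "black")
    if results.multi_face_landmarks:
        draw = ImageDraw.Draw(canvas)
        w, h = photo.size
        for lm in results.multi_face_landmarks[0].landmark:
            x, y = lm.x * w, lm.y * h  # landmark coordinates are normalized to [0, 1]
            draw.ellipse((x - 1, y - 1, x + 1, y + 1), fill="white")
    return canvas

condition = face_landmarks_image(Image.open('<Your Source Photo Path>'))
```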

You can find some example images below.

prompt: High-quality close-up dslr photo of man wearing a hat with trees in the background
![images_0](./images_0.png)

prompt: Girl smiling, professional dslr photograph, dark background, studio lights, high quality
![images_1](./images_1.png)

prompt: Portrait of a clown face, oil on canvas, bittersweet expression
![images_2](./images_2.png)