
This is a Control LoRA for making small edits to images with the THUDM/CogView4-6B model.

Example edit prompts:
- "Change it to look like it's in the style of an impasto painting."
- "Change the setting to spring with blooming trees."
- "Transform the setting to a stormy space."
Code: https://github.com/a-r-r-o-w/finetrainers
This is an experimental checkpoint; poor generalization is a known limitation.
Inference code:
```python
# For now, this requires the following branch of finetrainers:
# https://github.com/a-r-r-o-w/finetrainers/blob/f3e27cc39a2bc804cb373ea15522576e57f46d23/finetrainers/models/cogview4/control_specification.py
import torch
from diffusers import CogView4Pipeline
from diffusers.utils import load_image
from finetrainers.models.utils import _expand_linear_with_zeroed_weights
from finetrainers.patches import load_lora_weights
from finetrainers.patches.dependencies.diffusers.control import control_channel_concat

dtype = torch.bfloat16
device = torch.device("cuda")
generator = torch.Generator().manual_seed(0)

pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=dtype)

# Expand the patch embedding to accept the concatenated control latents.
# The added input channels start zeroed, so the base model's behavior is
# unchanged until the LoRA weights are loaded.
in_channels = pipe.transformer.config.in_channels
patch_channels = pipe.transformer.patch_embed.proj.in_features
pipe.transformer.patch_embed.proj = _expand_linear_with_zeroed_weights(
    pipe.transformer.patch_embed.proj, new_in_features=2 * patch_channels
)

load_lora_weights(pipe, "finetrainers/CogView4-6B-Edit-LoRA-v0", "cogview4-lora")
pipe.set_adapters("cogview4-lora", 0.9)
pipe.to(device)

prompt = "Make the image look like it's from an ancient Egyptian mural."
control_image = load_image("examples/training/control/cogview4/omni_edit/validation_dataset/0.png")
height, width = 1024, 1024

with torch.no_grad():
    # Sample the initial noise latents for generation.
    latents = pipe.prepare_latents(1, in_channels, height, width, dtype, device, generator)

    # Encode the image to be edited into VAE latent space.
    control_image = pipe.image_processor.preprocess(control_image, height=height, width=width)
    control_image = control_image.to(device=device, dtype=dtype)
    control_latents = pipe.vae.encode(control_image).latent_dist.sample(generator=generator)
    control_latents = (control_latents - pipe.vae.config.shift_factor) * pipe.vae.config.scaling_factor

    # Concatenate the control latents to the denoising latents along the
    # channel dimension on every transformer forward pass.
    with control_channel_concat(pipe.transformer, ["hidden_states"], [control_latents], dims=[1]):
        image = pipe(prompt, latents=latents, num_inference_steps=30, generator=generator).images[0]

image.save("output.png")
```
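
For intuition, here is a minimal sketch of what the two finetrainers helpers above are assumed to do, based on their names and how they are called. This is an illustration under those assumptions, not the actual finetrainers implementation: the patch-embedding `Linear` is widened to accept the extra control channels with the new weight columns zero-initialized, and the transformer's forward is temporarily wrapped so the control latents are concatenated to `hidden_states` along the channel dimension.

```python
# Illustrative sketch only; the behavior of the finetrainers helpers is an
# assumption inferred from their names, not their real implementation.
from contextlib import contextmanager

import torch
import torch.nn as nn


def expand_linear_zeroed(linear: nn.Linear, new_in_features: int) -> nn.Linear:
    # Widen the input dimension; zero-init the new columns so outputs for the
    # original inputs match the unmodified layer exactly.
    expanded = nn.Linear(new_in_features, linear.out_features, bias=linear.bias is not None)
    with torch.no_grad():
        expanded.weight.zero_()
        expanded.weight[:, : linear.in_features].copy_(linear.weight)
        if linear.bias is not None:
            expanded.bias.copy_(linear.bias)
    return expanded


@contextmanager
def concat_control(transformer: nn.Module, control_latents: torch.Tensor, dim: int = 1):
    # Temporarily patch forward() so every call sees the control latents
    # concatenated to hidden_states along `dim`; restore the original on exit.
    # Assumes hidden_states is the first positional argument.
    original_forward = transformer.forward

    def patched_forward(hidden_states, *args, **kwargs):
        hidden_states = torch.cat([hidden_states, control_latents], dim=dim)
        return original_forward(hidden_states, *args, **kwargs)

    transformer.forward = patched_forward
    try:
        yield
    finally:
        transformer.forward = original_forward
```

The zero-initialization is what makes this a "control" adaptation that can start from the pretrained model: before training, the expanded layer ignores the control channels entirely, and the LoRA then learns to use them.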