
Use the following code to generate an image:

```python
import random

import torch
from diffusers import StableDiffusionPipeline

# Prompt (Chinese): "cute, yellow mouse, pixie, cartoon"
prompt = "可爱的,黄色的老鼠,小精灵,卡通"
seed = random.randint(1, 2147483647)
print(seed)  # print the seed so a good result can be reproduced later

pipe = StableDiffusionPipeline.from_pretrained(
    "IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1",
    torch_dtype=torch.float16,
)

# Load the LoRA attention weights on top of the base model
model_path = "miluELK/Taiyi-sd-pokemon-LoRA-zh-512-v2"
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")

generator = torch.Generator("cuda").manual_seed(seed)
# Disable the safety checker; it must return one flag per image
pipe.safety_checker = lambda images, clip_input: (images, [False] * len(images))

image = pipe(prompt, generator=generator,
             num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("./" + prompt + ".png")
image  # displays the image in a notebook
```
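The script prints the randomly drawn seed so that a good result can be regenerated later by passing the same value back to `manual_seed`. A minimal stdlib-only sketch of why re-seeding reproduces the same draws (`torch.Generator` behaves analogously; the seed value here is just an example):

```python
import random

seed = 12345  # an example seed, standing in for the one the script prints

# Two generators seeded identically produce identical sequences,
# which is why saving the printed seed lets you reproduce an image.
run_a = random.Random(seed)
run_b = random.Random(seed)

draws_a = [run_a.randint(1, 2147483647) for _ in range(5)]
draws_b = [run_b.randint(1, 2147483647) for _ in range(5)]
print(draws_a == draws_b)  # True
```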

# LoRA text2image fine-tuning - miluELK/Taiyi-sd-pokemon-LoRA-zh-512-v2

These are LoRA adaptation weights for IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1, fine-tuned on the svjack/pokemon-blip-captions-en-zh dataset. Some example images are shown below.
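LoRA (Low-Rank Adaptation) leaves the base model's weights frozen and learns a pair of small matrices B and A per adapted layer, so the effective weight at inference is W + BA. A plain-Python sketch of that idea (illustrative only, not the diffusers implementation; the matrices and rank here are made up):

```python
def matmul(X, Y):
    # Naive matrix multiply for small illustrative matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Frozen base weight W (3x3) plus a rank-1 LoRA update B (3x1) @ A (1x3)
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
B = [[0.5], [0.0], [1.0]]
A = [[0.1, 0.2, 0.3]]

BA = matmul(B, A)
# Effective weight used at inference: W + B @ A
W_eff = [[W[i][j] + BA[i][j] for j in range(3)] for i in range(3)]
print(W_eff[0])
```

Because B and A are tiny compared to W, the adapter (the `load_attn_procs` download above) is only a few megabytes even though the base model is over a billion parameters.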

(Four example images from the original card.)
