---
tags:
  - text-to-image
  - stable-diffusion
  - lora
  - diffusers
  - image-generation
  - safetensors
widget:
  - text: myst33d style, 1boy, furry, looking at viewer, city background
    output:
      url: images/example_3xclqf9vw.png
  - text: myst33d style, 1boy, furry, looking at viewer, city background
    output:
      url: images/example_2pcqsm7pb.png
  - text: >-
      myst33d style, 1boy, furry, looking at viewer, city background, white fur,
      collar
    output:
      url: images/example_71yz865w6.png
  - text: myst33d style, 1boy, furry, two-tone fur, paint splatter
    output:
      url: images/example_nlratlgnw.png
  - text: >-
      myst33d style, 1boy, furry, two-tone fur, paint splatter, colorful,
      rainbow hair
    output:
      url: images/example_zbnvrj5ev.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
---

# Myst33d Style LoRA

A barely working LoRA that vaguely approximates my art style.

## Prompting

Use the trigger phrase `myst33d style` to stylize anything; `anime` also works well alongside `myst33d style`.

## Using with diffusers

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model and apply the LoRA weights
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.load_lora_weights("myst33d/myst33dLoRA")
pipe.to("cuda")

image = pipe(
    prompt="myst33d style, 1boy, furry, looking at viewer, city background",
    num_inference_steps=30,
    guidance_scale=9,
).images[0]
# Do whatever with the image here, e.g. image.save("example.png")
```
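If the style comes out too strong or too weak, the LoRA influence can usually be scaled at inference time. A minimal sketch, assuming a diffusers version where LoRAs loaded via `load_lora_weights` honor the scale passed through `cross_attention_kwargs`; the `0.8` value is only an illustrative starting point:

```python
# Continuing from the pipeline above: dial the LoRA strength up or down.
# (Assumes a diffusers version that reads the LoRA scale from
# cross_attention_kwargs; 0.8 is an arbitrary example value.)
image = pipe(
    prompt="myst33d style, 1boy, furry, looking at viewer, city background",
    num_inference_steps=30,
    guidance_scale=9,
    cross_attention_kwargs={"scale": 0.8},  # < 1.0 weakens the style, > 1.0 strengthens it
).images[0]
```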