---
language:
  - en
thumbnail: https://staticassetbucket.s3.us-west-1.amazonaws.com/outputv2_grid.png
tags:
  - stable-diffusion
  - stable-diffusion-diffusers
  - text-to-image
datasets:
  - lambdalabs/naruto-blip-captions
---

Stable Diffusion fine-tuned on Naruto images by Lambda Labs.

Put in a text prompt and generate your own Naruto character, no "prompt engineering" required!

If you want to find out how to train your own Stable Diffusion variants, see this example from Lambda Labs.

![Example outputs](https://staticassetbucket.s3.us-west-1.amazonaws.com/outputv2_grid.png)

> "Bill Gates with a hoodie", "John Oliver with Naruto style", "Hello Kitty with Naruto style", "Lebron James with a hat", "Michael Jackson as a ninja", "Banksy street art of ninja"

## Usage

```bash
pip install diffusers==0.3.0
pip install transformers scipy ftfy
```

```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "lambdalabs/sd-naruto-diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "Yoda"
scale = 10     # classifier-free guidance scale
n_samples = 4  # number of images to generate

# Sometimes the NSFW checker is confused by the Naruto images; you can
# disable it at your own risk here.
disable_safety = False

if disable_safety:
    def null_safety(images, **kwargs):
        return images, False
    pipe.safety_checker = null_safety

with autocast("cuda"):
    images = pipe(n_samples * [prompt], guidance_scale=scale).images

for idx, im in enumerate(images):
    im.save(f"{idx:06}.png")
```
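Instead of saving each sample separately, you may want to tile a batch into a single image for side-by-side comparison. A minimal sketch using Pillow, assuming `images` is the list of equally sized PIL images returned by the pipeline above (`image_grid` is a helper defined here, not part of diffusers):

```python
from PIL import Image

def image_grid(images, rows, cols):
    """Paste equally sized PIL images into a rows x cols grid."""
    w, h = images[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, im in enumerate(images):
        grid.paste(im, ((i % cols) * w, (i // cols) * h))
    return grid

# e.g. four samples arranged as a 2x2 grid:
# grid = image_grid(images, rows=2, cols=2)
# grid.save("grid.png")
```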

## Model description

Trained on BLIP-captioned Naruto images using 2xA6000 GPUs on Lambda GPU Cloud for around 30,000 steps (about 12 hours, at a cost of about $20).
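A fine-tuning run of roughly this shape can be sketched with the Hugging Face diffusers `train_text_to_image.py` example script. The exact script and hyperparameters used for this model are not stated here, so the base checkpoint, flags, and values below are illustrative assumptions, not the actual training command:

```shell
# Hypothetical sketch: fine-tune Stable Diffusion on the captioned dataset.
# All hyperparameter values are placeholders, not this model's configuration.
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --dataset_name="lambdalabs/naruto-blip-captions" \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=30000 \
  --learning_rate=1e-5 \
  --mixed_precision="fp16" \
  --output_dir="sd-naruto-model"
```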

## Links

Trained by Eole Cervenka after the work of Justin Pinkney (@Buntworthy) at Lambda Labs.