---
language:
- en
thumbnail: "https://s3.amazonaws.com/moonup/production/uploads/1663756797814-62bd5f951e22ec84279820e8.png"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
datasets:
- lambdalabs/pokemon-blip-captions
---
__Stable Diffusion fine-tuned on Pokémon by [Lambda Labs](https://lambdalabs.com/).__

Put in a text prompt and generate your own Pokémon character, no "prompt engineering" required!

If you want to find out how to train your own Stable Diffusion variants, see this [example](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning) from Lambda Labs.

![image.png](https://s3.amazonaws.com/moonup/production/uploads/1663756797814-62bd5f951e22ec84279820e8.png)

> Girl with a pearl earring, Cute Obama creature, Donald Trump, Boris Johnson, Totoro, Hello Kitty

## Usage

```bash
pip install diffusers==0.3.0
pip install transformers scipy ftfy
```

```python
import torch
from diffusers import StableDiffusionPipeline
from torch import autocast

pipe = StableDiffusionPipeline.from_pretrained("lambdalabs/sd-pokemon-diffusers", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Yoda"
scale = 10
n_samples = 4

# Sometimes the NSFW checker is confused by the Pokémon images; you can
# disable it here at your own risk.
disable_safety = False

if disable_safety:
    def null_safety(images, **kwargs):
        return images, False
    pipe.safety_checker = null_safety

with autocast("cuda"):
    images = pipe(n_samples * [prompt], guidance_scale=scale).images

for idx, im in enumerate(images):
    im.save(f"{idx:06}.png")
```
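If you want a single contact sheet like the sample image above rather than individual files, you can tile the generated images into a grid. This is a minimal sketch, not part of the original example; the `image_grid` helper is just an illustrative name and assumes all samples share the same size.

```python
from PIL import Image

def image_grid(imgs, rows, cols):
    # Paste same-sized images into one sheet, row by row.
    w, h = imgs[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid

# e.g. one row of the four samples generated above
image_grid(images, rows=1, cols=n_samples).save("grid.png")
```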
## Model description

Trained on [BLIP captioned Pokémon images](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) using 2x A6000 GPUs on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud) for around 15,000 steps (about 6 hours, at a cost of about $10).

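If you want to inspect the training data before fine-tuning your own variant, the captioned dataset can be loaded with the `datasets` library. A minimal sketch, assuming the dataset exposes `image` and `text` columns, as BLIP-captioned datasets on the Hub typically do:

```python
from datasets import load_dataset

# Pull the BLIP-captioned Pokémon dataset from the Hugging Face Hub.
ds = load_dataset("lambdalabs/pokemon-blip-captions", split="train")

# Each example pairs a Pokémon image with its machine-generated caption.
example = ds[0]
print(example["text"])
example["image"].save("sample_pokemon.png")
```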
## Links

- [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers)
- [Captioned Pokémon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)
- [Model weights in Diffusers format](https://huggingface.co/lambdalabs/sd-pokemon-diffusers)
- [Original model weights](https://huggingface.co/justinpinkney/pokemon-stable-diffusion)
- [Training code](https://github.com/justinpinkney/stable-diffusion)

Trained by [Justin Pinkney](https://justinpinkney.com) ([@Buntworthy](https://twitter.com/Buntworthy)) at [Lambda Labs](https://lambdalabs.com/).