---
language:
- en
thumbnail: "https://staticassetbucket.s3.us-west-1.amazonaws.com/outputv2_grid.png"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
datasets:
- lambdalabs/naruto-blip-captions
---

__Stable Diffusion fine-tuned on Naruto by [Lambda Labs](https://lambdalabs.com/).__

Put in a text prompt and generate your own Naruto character, no "prompt engineering" required!

If you want to find out how to train your own Stable Diffusion variants, see this [example](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning) from Lambda Labs.

![pk1.jpg](https://staticassetbucket.s3.us-west-1.amazonaws.com/outputv2_grid.png)
> "Bill Gates with a hoodie", "John Oliver with Naruto style", "Hello Kitty with Naruto style", "Lebron James with a hat", "Michael Jackson as a ninja", "Banksy street art of ninja"

## Usage

```bash
pip install diffusers==0.3.0
pip install transformers scipy ftfy
```

```python
import torch
from diffusers import StableDiffusionPipeline
from torch import autocast

pipe = StableDiffusionPipeline.from_pretrained("lambdalabs/sd-naruto-diffusers", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Yoda"
scale = 10
n_samples = 4

# Sometimes the NSFW checker is confused by the Naruto images; you can
# disable it here, at your own risk.
disable_safety = False

if disable_safety:
    def null_safety(images, **kwargs):
        # Return one "not NSFW" flag per image, matching the checker's signature.
        return images, [False] * len(images)
    pipe.safety_checker = null_safety

with autocast("cuda"):
    images = pipe(n_samples * [prompt], guidance_scale=scale).images

for idx, im in enumerate(images):
    im.save(f"{idx:06}.png")
```

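The snippet above writes each sample to its own file. To reproduce a contact sheet like the banner image, the outputs can be tiled with Pillow. A minimal sketch, where the solid-color placeholders stand in for the pipeline's `images` list:

```python
from PIL import Image

def image_grid(imgs, rows, cols):
    """Tile a list of equally sized PIL images into a rows x cols grid."""
    assert len(imgs) == rows * cols
    w, h = imgs[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, im in enumerate(imgs):
        grid.paste(im, ((i % cols) * w, (i // cols) * h))
    return grid

# Placeholders standing in for the four pipeline outputs above.
samples = [Image.new("RGB", (64, 64), c) for c in ("red", "green", "blue", "white")]
grid = image_grid(samples, rows=2, cols=2)
grid.save("grid.png")
```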
## Model description

Trained on [BLIP-captioned Naruto images](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) using 2xA6000 GPUs on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud) for around 30,000 steps (about 12 hours, at a cost of about $20).

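For a rough sense of the training throughput, the figures above work out as follows (simple arithmetic on the numbers quoted, not a measured benchmark):

```python
# Back-of-the-envelope throughput and cost from the training run above.
steps = 30_000
hours = 12
cost_usd = 20

steps_per_sec = steps / (hours * 3600)
cost_per_1k_steps = cost_usd / (steps / 1000)

print(f"{steps_per_sec:.2f} steps/s")          # 0.69 steps/s
print(f"${cost_per_1k_steps:.2f} per 1k steps")  # $0.67 per 1k steps
```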
## Links

- [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers)
- [Captioned Naruto dataset](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions)
- [Model weights in Diffusers format](https://huggingface.co/lambdalabs/sd-naruto-diffusers)
- [Original model weights](https://huggingface.co/justinpinkney/pokemon-stable-diffusion)
- [Training code](https://github.com/justinpinkney/stable-diffusion)

Trained by Eole Cervenka after the work of [Justin Pinkney](https://www.justinpinkney.com) ([@Buntworthy](https://twitter.com/Buntworthy)) at [Lambda Labs](https://lambdalabs.com/).