Norod78 committed
Commit e99b4f9
1 Parent(s): e865064

Update README.md

Files changed (1)
  1. README.md +19 -16
README.md CHANGED
@@ -19,6 +19,25 @@ If you want more details on how to generate your own BLIP captioned dataset see

Training was done using a slightly modified version of Hugging Face's text-to-image training [example script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py)

+ ## About
+
+ Put in a text prompt and generate cartoony/simpsony images
+
+ **A beautiful hungry demon girl, John Philip Falter, Very detailed painting, Mark Ryden**
+
+ ![A beautiful hungry demon girl, John Philip Falter, Very detailed painting, Mark Ryden](https://huggingface.co/Norod78/sd-simpsons-model/raw/main/examples/00496-2202810362-A%20beautiful%20hungry%20demon%20girl,%20John%20Philip%20Falter,%20Very%20detailed%20painting,%20Mark%20Ryden.jpg)
+
+ **Gal Gadot, cartoon**
+
+ ![Gal Gadot, cartoon](https://huggingface.co/Norod78/sd-simpsons-model/raw/main/examples/00323-2574793241-Gal%20Gadot,%20cartoon.jpg)
+
+ ## More examples
+
+ The [examples](https://huggingface.co/Norod78/sd-simpsons-model/tree/main/examples) folder contains a few images generated by this model's ckpt file using [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), which means their EXIF info contains the parameters used to generate them
+
+ ## Sample code
+
+
```py
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
import torch
@@ -44,22 +63,6 @@ image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```

- ## About
-
- Put in a text prompt and generate cartoony/simpsony images
-
- **A beautiful hungry demon girl, John Philip Falter, Very detailed painting, Mark Ryden**
-
- ![A beautiful hungry demon girl, John Philip Falter, Very detailed painting, Mark Ryden](https://huggingface.co/Norod78/sd-simpsons-model/raw/main/examples/00496-2202810362-A%20beautiful%20hungry%20demon%20girl,%20John%20Philip%20Falter,%20Very%20detailed%20painting,%20Mark%20Ryden.jpg)
-
- **Gal Gadot, cartoon**
-
- ![Gal Gadot, cartoon](https://huggingface.co/Norod78/sd-simpsons-model/raw/main/examples/00323-2574793241-Gal%20Gadot,%20cartoon.jpg)
-
- ## More examples
-
- The [examples](https://huggingface.co/Norod78/sd-simpsons-model/tree/main/examples) folder contains a few images generated by this model's ckpt file using [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) which means their EXIF info contain the parameter used to generate them
-
## Dataset and Training

Finetuned for 10,000 iterations upon [Runway ML's Stable-Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on [BLIP captioned Simpsons images](https://huggingface.co/datasets/Norod78/simpsons-blip-captions) using 1xA5000 GPU on my home desktop computer
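
The fine-tuning paragraph above points at the diffusers text-to-image example script and the BLIP-captioned Simpsons dataset; that script consumes the data as image/caption pairs. Below is a minimal sketch for peeking at the dataset with the `datasets` library; the column names `image` and `text` are an assumption (they are the script's defaults), not something stated in this README.

```py
# Sketch: inspect the fine-tuning dataset.
# Assumption: it exposes "image" and "text" columns, the defaults expected by
# the diffusers train_text_to_image.py example script.
from datasets import load_dataset

ds = load_dataset("Norod78/simpsons-blip-captions", split="train")
print(ds)                      # row count and column names
sample = ds[0]
print(sample["text"])          # BLIP-generated caption for the first image
sample["image"].save("simpsons_sample.png")  # PIL image; write a copy to disk
```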
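
The README's sample snippet is only partially visible in the hunks above, since the diff elides its unchanged middle. For orientation, here is a self-contained sketch of the kind of usage it points at, assuming the standard diffusers `StableDiffusionPipeline` API and the model id `Norod78/sd-simpsons-model`; the scheduler wiring and output filename are illustrative, not copied from the README, and the prompt is taken from the README's example images.

```py
# Sketch of text-to-image inference with this model.
# The README imports LMSDiscreteScheduler, so it is wired in here as well;
# the exact setup in the README's full snippet may differ.
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
import torch

model_id = "Norod78/sd-simpsons-model"

# Load the scheduler config shipped with the model, then the pipeline in fp16.
scheduler = LMSDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Gal Gadot, cartoon"  # one of the example prompts shown in the README
image = pipe(prompt).images[0]
image.save("gal_gadot_cartoon.png")
```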